
feat(workflows): add maintainer-standup workflow for daily PR/issue triage#1428

Merged
Wirasm merged 2 commits into dev from feat/maintainer-standup-workflow
Apr 27, 2026

Conversation

Wirasm (Collaborator) commented Apr 27, 2026

Summary

  • Problem: Maintainers (especially the main maintainer, with ~66 open PRs at time of writing) have no pre-built way to start their day with a prioritized, direction-aligned view of the queue.
  • Why it matters: Manual morning triage is slow, drifts in priority criteria across days, and rarely produces written history. PRs not aligned with project direction sit too long.
  • What changed: New maintainer-standup workflow + 3 gather scripts + a synthesis command file + a per-maintainer config layout under .archon/maintainer-standup/ (direction.md committed, profile/state/briefs gitignored).
  • What did NOT change (scope boundary): No engine, package, or platform-adapter code touched. No existing workflows modified. Only .archon/ user content + one .gitignore block + one ESLint global-ignore line.

UX Journey

Before

Maintainer                                    GitHub
──────────                                    ──────
opens laptop, sips coffee
scrolls `gh pr list`
re-derives priority from memory ◀──────────── 66 open PRs, all equal weight
context-switches to direction.md
mentally cross-references
forgets what was hot yesterday
picks something to work on

After

Maintainer                                       Archon                                    GitHub
──────────                                       ──────                                    ──────
runs `archon workflow run maintainer-standup`──▶ git-status, gh-data, read-context (parallel scripts)
                                                 ──────────────────────────────────────▶  fetches PRs/issues/closed
                                                 reads direction.md + profile.md + last 3 briefs + state.json
                                                 [synthesize node, Sonnet, output_format]
                                                 - classifies P1-P4 against direction.md
                                                 - cites direction clauses for declines
                                                 - diffs against prior run's observed_prs
                                                 - ages carry-over items across runs
                                                 [persist node, inline bun script]
                                                 - writes briefs/YYYY-MM-DD.md
                                                 - writes state.json
reads briefs/<today>.md ◀─────────────────────── (the prose brief)
acts on P1 list
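The gather layer in the flow above is three standalone Bun scripts that each print one JSON object to stdout. A minimal sketch of that contract (field names here are illustrative assumptions, not the actual scripts' schema):

```typescript
// Minimal gather-script skeleton: collect data, emit exactly one JSON object
// on stdout so the synthesis node can consume it as $nodeId.output.
// Field names are illustrative assumptions, not the real scripts' schema.
import { execFileSync } from 'node:child_process';

function git(args: string[]): string {
  try {
    return execFileSync('git', args, { encoding: 'utf8' }).trim();
  } catch {
    return ''; // degrade gracefully outside a git repo
  }
}

const payload = {
  branch: git(['rev-parse', '--abbrev-ref', 'HEAD']),
  dirty: git(['status', '--porcelain']).length > 0,
};
console.log(JSON.stringify(payload));
```

Keeping the stdout contract to a single JSON object is what lets the three scripts run as a parallel layer with no shared state.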

Architecture Diagram

Before

.archon/
├── commands/        (existing)
├── scripts/         (existing — 2 echo demos)
├── workflows/       (existing — 11 workflows)
└── state/           (existing — repo-triage cross-run memory)

After

.archon/
├── commands/
│   └── maintainer-standup.md                  [+]   synthesis prompt
├── maintainer-standup/                        [+]   per-workflow folder
│   ├── direction.md                           [+]   committed, neutral north-star
│   ├── README.md                              [+]   setup docs
│   ├── profile.md.example                     [+]   committed template
│   ├── profile.md                             [+]   gitignored, per-maintainer
│   ├── state.json                             [+]   gitignored, auto-written
│   └── briefs/YYYY-MM-DD.md                   [+]   gitignored, auto-written
├── scripts/
│   ├── maintainer-standup-git-status.ts       [+]   bun, fetches origin/dev
│   ├── maintainer-standup-gh-data.ts          [+]   bun, gh queries
│   └── maintainer-standup-read-context.ts     [+]   bun, reads local state
└── workflows/
    └── maintainer-standup.yaml                [+]   DAG: 3 gathers ── synthesize ── persist

.gitignore                                     [~]   3 new patterns under .archon/maintainer-standup/
eslint.config.mjs                              [~]   1 new line: ignore .archon/**

Connection inventory:

| From | To | Status | Notes |
| --- | --- | --- | --- |
| .archon/workflows/maintainer-standup.yaml | 3 gather scripts in .archon/scripts/ | new | parallel layer 0 |
| 3 gather scripts | .archon/maintainer-standup/{state.json, profile.md, direction.md, briefs/} | new | scripts read local state |
| .archon/commands/maintainer-standup.md | gather node outputs ($nodeId.output) | new | Sonnet synthesis with output_format |
| persist node | .archon/maintainer-standup/{state.json, briefs/<date>.md} | new | inline bun script, JSON-as-JS-literal pattern |
| ESLint typed-rule pipeline | .archon/** | modified | ignored; not in any tsconfig project |
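The "JSON-as-JS-literal" pattern in the persist row can be sketched as follows. This is illustrative only (the real inline script lives in maintainer-standup.yaml) and the data values are made up:

```typescript
// Persist sketch: the engine substitutes the synthesis node's JSON output into
// the inline script source. A JSON object literal is valid JS, so it can be
// assigned directly. Values below stand in for the real $synthesize.output.
import { join } from 'node:path';

const data = {
  brief_markdown: '# Standup brief\n\n- P1: review the oldest green-CI PR\n',
  next_state: { last_dev_sha: 'abc1234', observed_prs: [101, 102] },
};

const date = new Date().toISOString().slice(0, 10); // briefs are dated YYYY-MM-DD
const briefPath = join('.archon', 'maintainer-standup', 'briefs', `${date}.md`);
const stateJson = JSON.stringify(data.next_state, null, 2);
// The real node then writes data.brief_markdown to briefPath and
// stateJson to .archon/maintainer-standup/state.json.
```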

Label Snapshot

  • Risk: risk: low
  • Size: size: M
  • Scope: workflows
  • Module: workflows:maintainer-tooling

Change Metadata

  • Change type: feature
  • Primary scope: workflows

Linked Issue

Validation Evidence (required)

archon validate workflows maintainer-standup    # → ok (1 valid, 0 errors)
bun run lint                                    # → clean
bun run type-check                              # → clean across all 10 packages
bun run format:check                            # → "All matched files use Prettier code style!"
  • Evidence provided: Workflow validator passes; lint/type/format checks pass. End-to-end run completed successfully on the second invocation (DAG resume skipped the four already-completed nodes and persisted state cleanly in 15ms after the YAML fix).
  • Skipped commands and why: bun run test skipped — this PR adds only .archon/ user content + one .gitignore block + one ESLint ignore line. No source code modified, so no test coverage is at risk. Run on a follow-up if reviewer prefers.

Security Impact (required)

  • New permissions/capabilities? No.
  • New external network calls? Yes — gh pr list/view, gh issue list/view, git fetch origin. All already used by other bundled Archon tooling (e.g. repo-triage). No new auth, no new endpoints.
  • Secrets/tokens handling changed? No — uses existing gh auth as the user that invoked the workflow.
  • File system access scope changed? No — writes confined to .archon/maintainer-standup/ (already a user-scoped directory).

Compatibility / Migration

  • Backward compatible? Yes — additive.
  • Config/env changes? No.
  • Database migration needed? No.

Human Verification (required)

What was personally validated beyond CI:

  • Verified scenarios:
    • First run with no prior state — synthesizes baseline brief, snapshots observed_prs for next-run diffing (~5.4 min Sonnet synthesis on 66 PRs).
    • Failed-and-resumed flow — initial run failed in persist due to a String.raw template-literal collision with backticks in brief_markdown; reran cleanly via DAG resume in 15ms after fixing the inline script. Brief content and state file both written correctly.
    • Validator (archon validate workflows maintainer-standup) passes after each change.
  • Edge cases checked:
    • Missing state.json → read-context.ts returns prior_state: null; synthesis falls back to first-run baseline.
    • Dirty working tree on dev → git-status.ts logs pull_status: dirty and skips pull, still reports new commits since prior SHA.
    • Missing gh_handle in profile.md → gh-data.ts warns to stderr and skips review-requested / authored / assigned queries; the open-PR list still returns.
    • Backtick-bearing markdown in synthesized output → no longer breaks persist (root cause filed as #1427, "docs/examples: String.raw $nodeId.output pattern is fragile when output contains backticks", to fix the misleading example pattern).
  • What was not verified: invocation from a non-git directory (workflow is git-required by design and the CLI rejects this); behavior on a fresh clone with no gh auth login (would fail at gh-data.ts with a clear stderr message).
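The backtick failure above generalizes: splicing model output into a template-literal source string breaks at the first embedded backtick, while a JSON literal is safe because JSON strings are double-quoted. A small self-contained demonstration (variable names are illustrative):

```typescript
// Payload containing backticks, as synthesized markdown often does.
const briefMarkdown = 'Run `archon validate workflows` before merging.';

// Fragile: the generated source terminates its template literal at the
// first backtick inside the payload, producing a syntax error.
const fragileSource = 'const brief = `' + briefMarkdown + '`;';

// Robust: JSON string literals are double-quoted and escape quotes,
// backslashes, and newlines, so embedded backticks are inert.
const safeSource = `const brief = ${JSON.stringify(briefMarkdown)};`;

// The safe source round-trips the payload exactly.
const roundTrip = new Function(`${safeSource} return brief;`)();
```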

Side Effects / Blast Radius (required)

  • Affected subsystems/workflows: none — net-new files. The ESLint ignore line for .archon/** matches the existing .claude/skills/** precedent and drops typed-lint coverage that was previously broken anyway (no tsconfig project covers .archon/).
  • Potential unintended effects: if any contributor was relying on ESLint covering .archon/scripts/*.ts, that lint coverage is removed. In practice it could not have worked — await-thenable and other typed rules require parser-services that the global ignore now confirms are not configured for .archon/.
  • Guardrails / monitoring: archon validate workflows runs in this PR's CI and on every workflow invocation. The failed-and-resumed run exercised the standard DAG resume path and persisted cleanly.

Rollback Plan (required)

  • Fast rollback command/path: git revert <merge-commit-sha> — single commit, 10 files, all additive (except 2 small modified files). All gitignored runtime artifacts under .archon/maintainer-standup/ (profile/state/briefs) are local-only and unaffected.
  • Feature flags or config toggles: none — the workflow is opt-in by name; not running it changes nothing.
  • Observable failure symptoms: archon validate workflows maintainer-standup would fail; CI catches. Runtime failure of any node surfaces in standard workflow run output and is auto-resumable.

Risks and Mitigations

  • Risk: synthesis prompt may produce malformed JSON if Claude drifts from output_format schema — would fail at persist node again.
    • Mitigation: output_format is SDK-enforced on Claude; the failed-and-resumed flow this PR went through proves the persist node can be fixed and re-run without losing the (expensive) synthesis work via DAG resume.
  • Risk: Pi-provider users expecting to run this workflow on a cheaper model — output_format is best-effort on Pi and could silently parse-fail on a deeply nested schema.
    • Mitigation: workflow pins provider: claude + model: sonnet. README does not advertise Pi compatibility. Cross-provider work can come as a follow-up if Pi/Minimax demonstrates schema reliability.
  • Risk: per-maintainer profile/state coupling — each contributor's local state divergence could surprise them after a long absence.
    • Mitigation: state.json is overwritten each run; briefs/ is purely additive history. Worst case is "the first run of the day reads an old state and reports lots of carry-over deltas," which is informational, not destructive.

Summary by CodeRabbit

  • New Features
    • Added a daily Maintainer Standup that synthesizes git/GitHub activity into a single brief with prioritized P1–P4 triage, carry‑over handling, and surfaced direction questions.
  • Documentation
    • New README, direction guide, and profile template explaining workflow usage and how to tune triage behavior.
  • Chores
    • Persisted briefs/state, updated ignore and lint rules to exclude per‑maintainer standup files.

feat(workflows): add maintainer-standup workflow for daily PR/issue triage

Daily morning briefing that pulls origin/dev, triages all open PRs and assigned
issues against direction.md, and surfaces progress vs. the previous run. Designed
for live-checkout use (worktree.enabled: false) so it can read its own state.

Layout under .archon/maintainer-standup/:
  - direction.md (committed) — project north-star: what Archon IS / IS NOT.
    Drives PR P4 polite-decline classification with cited clauses.
  - README.md / profile.md.example — setup docs and template for new maintainers.
  - profile.md, state.json, briefs/YYYY-MM-DD.md — gitignored, per-maintainer.

Engine:
  - 3 parallel gather scripts in .archon/scripts/maintainer-standup-*.ts
    (git-status, gh-data, read-context) — bun runtime, JSON stdout.
  - Synthesis node: command file with output_format schema for
    { brief_markdown, next_state }.
  - Persist node: tiny inline bun script writes both to disk.

Run-to-run continuity: state.json carries observed_prs/issues snapshots, so the
next run can detect what merged, what closed, what the maintainer shipped, and
which carry-over items aged past N days.

Also adds .archon/** to the ESLint global ignore list (matches the existing
.claude/skills/** pattern) since .archon/ is user content and not part of any
tsconfig project.
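The run-to-run continuity described above implies a small snapshot-diff step. A sketch under assumed field names (the real state.json schema may differ):

```typescript
// Hypothetical state.json shape; fields are inferred from the description
// above (observed snapshots, last dev SHA, carry-over aging), not the schema.
interface StandupState {
  last_run_at: string;      // ISO timestamp of the previous run
  last_dev_sha: string;     // origin/dev SHA used for commit/diff comparison
  observed_prs: number[];   // open PR numbers seen on the previous run
  observed_issues: number[];
  carry_over: { title: string; first_seen: string }[]; // aged across runs
}

// PRs present last run but gone now were merged or closed in between.
function resolvedSince(prior: number[], current: number[]): number[] {
  const open = new Set(current);
  return prior.filter((n) => !open.has(n));
}

const resolved = resolvedSince([101, 102, 103], [102, 103, 104]);
// resolved === [101]
```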
coderabbitai Bot commented Apr 27, 2026

Caution

Review failed

The pull request is closed.

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6b7895d8-59ce-4df6-9c08-a779fe342303

📥 Commits

Reviewing files that changed from the base of the PR and between b492400 and 4db48b0.

📒 Files selected for processing (4)
  • .archon/commands/maintainer-standup.md
  • .archon/scripts/maintainer-standup-gh-data.ts
  • .archon/scripts/maintainer-standup-git-status.ts
  • .archon/workflows/maintainer-standup.yaml

📝 Walkthrough

Walkthrough

Adds a maintainer standup system: three data-gathering Bun/TypeScript scripts (git status, GitHub data, read context), a workflow that runs them in parallel and invokes a synthesis step, templates and docs for triage (P1–P4) and carry-over state, and persistence of brief markdown and next_state JSON.

Changes

Cohort / File(s) Summary
Workflow & Command
.archon/workflows/maintainer-standup.yaml, .archon/commands/maintainer-standup.md
New workflow that triggers daily/morning briefs, runs three collection scripts in parallel, enforces synthesis output contract (brief_markdown + next_state), and persists state and dated brief files.
Data Collection Scripts
.archon/scripts/maintainer-standup-git-status.ts, .archon/scripts/maintainer-standup-gh-data.ts, .archon/scripts/maintainer-standup-read-context.ts
Adds three Bun/TypeScript scripts: git-status (fetch/pull, dev SHA, dirty state, new commits/diff), gh-data (queries gh for PRs/issues, authored commits, recent closed items), read-context (loads direction.md, profile.md, prior state, recent briefs).
Documentation & Config
.archon/maintainer-standup/README.md, .archon/maintainer-standup/direction.md, .archon/maintainer-standup/profile.md.example
Introduces README, direction guidance for triage and citation rules, and a profile example for per-maintainer settings.
Persistence & Ignoring
.gitignore, eslint.config.mjs, .archon/maintainer-standup/*
Ignores per-maintainer profile, state, and briefs in git; adds .archon/** to ESLint ignore so new scripts/docs are excluded from linting.

Sequence Diagram

sequenceDiagram
    participant Trigger as Workflow Trigger
    participant GS as GitStatus (script)
    participant GH as GH-Data (script)
    participant RC as ReadContext (script)
    participant Synth as Synthesis (command)
    participant FS as FileSystem

    Trigger->>GS: start (parallel)
    Trigger->>GH: start (parallel)
    Trigger->>RC: start (parallel)

    GS->>FS: read prior state.json
    GS->>FS: git fetch/pull, compute SHA, dirty, commits/diff
    GS-->>Synth: emit git-status.output (JSON)

    GH->>FS: read profile.md, prior state.json
    GH->>GH: run gh queries (PRs, issues), git log for author
    GH-->>Synth: emit gh-data.output (JSON)

    RC->>FS: read direction.md, profile.md, prior state, recent briefs
    RC-->>Synth: emit read-context.output (JSON)

    Synth->>Synth: compare prior vs current snapshots
    Synth->>Synth: compute resolved/carry-over, assign P1–P4, surface direction questions
    Synth-->>FS: write next_state (state.json)
    Synth-->>FS: write brief markdown (briefs/YYYY-MM-DD.md)
    Synth-->>Trigger: complete

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~35 minutes

Poem

🐇 I hopped through commits at break of day,

Gathered PRs and issues in tidy array,
Tagged them P1 through P4 with care,
Carried the slow ones like carrots to bear,
A brief for the maintainer — fresh as spring air.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (4 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title 'feat(workflows): add maintainer-standup workflow for daily PR/issue triage' directly and clearly summarizes the main change: the addition of a new maintainer-standup workflow for daily PR/issue triage. |
| Description check | ✅ Passed | The description is comprehensive and well-structured, covering all key template sections: Summary (problem, impact, changes, scope), UX Journey (before/after flows), Architecture Diagram (module layout and connections), Change Metadata (type, scope), Validation Evidence (with test results and reasoning for skips), Security Impact (network calls and auth), Compatibility (backward-compatible), Human Verification (tested scenarios and edge cases), Side Effects, Rollback Plan, and Risks/Mitigations. |
| Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (4)
.archon/scripts/maintainer-standup-git-status.ts (2)

17-24: Prefer execFileSync with argv arrays over execSync for git commands.

execSync invokes a shell and (at L58/L61) interpolates priorSha directly into the command string. priorSha is read from state.json, which is written by the workflow itself, so the practical risk is low — but using execFileSync with an argv array eliminates shell parsing entirely and aligns with the project convention of avoiding exec for git calls. It also means dropping the shell quotes around --format=....

🛠 Proposed refactor
-import { execSync } from 'node:child_process';
+import { execFileSync } from 'node:child_process';
@@
-function run(cmd: string): { stdout: string; ok: boolean } {
+function run(args: string[]): { stdout: string; ok: boolean } {
   try {
-    const out = execSync(cmd, { stdio: ['ignore', 'pipe', 'pipe'] }).toString();
+    const out = execFileSync('git', args, { stdio: ['ignore', 'pipe', 'pipe'] }).toString();
     return { stdout: out, ok: true };
   } catch {
     return { stdout: '', ok: false };
   }
 }
@@
-run('git fetch origin dev');
+run(['fetch', 'origin', 'dev']);
@@
-const currentBranch = run('git rev-parse --abbrev-ref HEAD').stdout.trim();
-const isDirty = run('git status --porcelain').stdout.trim().length > 0;
+const currentBranch = run(['rev-parse', '--abbrev-ref', 'HEAD']).stdout.trim();
+const isDirty = run(['status', '--porcelain']).stdout.trim().length > 0;
@@
-  const result = run('git pull --ff-only origin dev');
+  const result = run(['pull', '--ff-only', 'origin', 'dev']);
@@
-const currentDevSha = run('git rev-parse origin/dev').stdout.trim();
+const currentDevSha = run(['rev-parse', 'origin/dev']).stdout.trim();
@@
-  const log = run(`git log ${priorSha}..origin/dev --no-decorate --format="%h %an: %s"`);
+  const log = run(['log', `${priorSha}..origin/dev`, '--no-decorate', '--format=%h %an: %s']);
   if (log.ok) {
     newCommits = log.stdout;
-    diffStat = run(`git diff --stat ${priorSha}..origin/dev`).stdout;
+    diffStat = run(['diff', '--stat', `${priorSha}..origin/dev`]).stdout;

Based on learnings: Use archon/git functions for git operations; use execFileAsync (not exec) when calling git directly. (archon/git doesn't apply here since .archon/scripts must stay monorepo-free, but the execFile recommendation still holds.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.archon/scripts/maintainer-standup-git-status.ts around lines 17 - 24, The
run function currently uses execSync with a shell string; change it to use
child_process.execFileSync (or execFileSync) and pass git and its arguments as
an argv array instead of a shell command to avoid shell interpolation; update
the implementation referenced by the run function so calls that previously
interpolated priorSha or used quoted format flags become execFileSync('git',
['log', '--format=...'] , { encoding: 'utf8', stdio: ['ignore','pipe','pipe']
})-style calls (i.e., drop shell quoting and pass each flag/arg as array
elements) and preserve the returned shape { stdout: string; ok: boolean } and
error handling.

56-65: Validate priorSha shape before interpolating into git ranges.

If state.json ends up with a last_dev_sha that isn't a valid object name (e.g., empty after trim, contains whitespace, was hand-edited), git log <bad>..origin/dev will fail and the script falls back to "(prior SHA not found locally — full diff unavailable)" — which is the correct fallback, but it would be cleaner to short-circuit before invoking git. A simple /^[0-9a-f]{7,40}$/i check on priorSha would also harden the path against any future upstream change to how state is populated.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.archon/scripts/maintainer-standup-git-status.ts around lines 56 - 65,
Validate the priorSha value before using it in git range commands: in the block
that compares priorSha and currentDevSha, add a check using a regex like
/^[0-9a-f]{7,40}$/i against priorSha and if it fails, skip calling run(`git log
...`) and run(`git diff ...`) and set newCommits to the existing fallback string
and leave diffStat empty (or the current fallback behavior); update the code
around the priorSha/currentDevSha check and use the run(...) calls, newCommits,
and diffStat variables referenced in the snippet so invalid or empty priorSha
values never get interpolated into git commands.
.archon/scripts/maintainer-standup-gh-data.ts (1)

33-39: Frontmatter regex is unscoped — picks up any line in the file.

The /m flag makes ^gh_handle:\s*(\S+)\s*$ match anywhere, not just inside the leading ------ block. If profile.md later includes an example or quoted snippet with gh_handle: someone, that line will win. Cheap fix is to slice the frontmatter first:

Proposed fix
-  const profile = readFileSync(profilePath, 'utf8');
-  const match = profile.match(/^gh_handle:\s*(\S+)\s*$/m);
-  if (match) ghHandle = match[1];
+  const profile = readFileSync(profilePath, 'utf8');
+  const fm = profile.match(/^---\r?\n([\s\S]*?)\r?\n---/);
+  const match = (fm?.[1] ?? profile).match(/^gh_handle:\s*(\S+)\s*$/m);
+  if (match) ghHandle = match[1];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.archon/scripts/maintainer-standup-gh-data.ts around lines 33 - 39, The
frontmatter regex is unscoped and currently searches the whole file via
profile.match; instead, read the file into profile (already done) but first
extract the YAML frontmatter bounded by leading '---' and the next '---' (or
EOF) and run the gh_handle regex against that slice only; update the code around
profilePath/profile/readFileSync and replace the profile.match(...) usage with a
match on the sliced frontmatter so ghHandle is set only from the top-level
frontmatter block.
.archon/commands/maintainer-standup.md (1)

20-38: Add a language to the upstream-output fenced blocks (MD040).

markdownlint-cli2 flags lines 20, 28, and 36. Use text (or json if appropriate) so docs lint stays clean.

Proposed fix
-```
+```text
 $git-status.output

Apply the same change to the `$gh-data.output` and `$read-context.output` fences.

<details>
<summary>🤖 Prompt for AI Agents</summary>

Verify each finding against the current code and only fix it if needed.

In @.archon/commands/maintainer-standup.md around lines 20 - 38, The fenced code
blocks for the upstream outputs are missing a language tag and failing
markdownlint (MD040); update the three fences that wrap $git-status.output,
$gh-data.output, and $read-context.output to include a language identifier (e.g.
use text or json as appropriate) so each block opens with a ```text fence, followed
by the existing block contents and a closing ``` fence, ensuring lint passes for MD040.


</details>


<details>
<summary>🤖 Prompt for all review comments with AI agents</summary>

Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.archon/commands/maintainer-standup.md:

  • Line 71: The P1 rule references CI status that isn’t being fetched: update the
    data gatherer or the doc to stay consistent. Either add the statusCheckRollup
    GraphQL field to the prFields array in maintainer-standup-gh-data.ts so
    gh-data.output.all_open_prs includes CI/status info (so the model can apply
    "green CI" without extra gh calls), or soften the P1 wording in
    .archon/commands/maintainer-standup.md to require “CI green per gh pr checks if
    you drill in” (or similar) so prompts match current inputs; pick one approach
    and apply it consistently (modify prFields to include statusCheckRollup if you
    want automated checks, otherwise adjust the P1 sentence).

In @.archon/scripts/maintainer-standup-gh-data.ts:

  • Around line 76-79: The current creation of allOpenPrs via parseJson(run(`gh pr list --state open --limit 100 --json ${prFields}`), []) can silently truncate
    results; change the gh invocation to fetch the full set (e.g. use --paginate
    and a high --limit such as 1000: run(`gh pr list --state open --limit 1000 --paginate --json ${prFields}`)) or implement truncation detection: after
    running the query, check the returned count against the requested limit and, if
    equal, emit a stderr warning and include a truncated: true flag alongside
    observed_prs so synthesis can handle incomplete snapshots; update the code paths
    that populate observed_prs (the allOpenPrs parseJson call and similar
    recently_closed_prs/recently_closed_issues invocations) accordingly.
  • Around line 15-22: The run function currently shells out via execSync(cmd),
    which allows shell injection; change it to use child_process.execFileSync(file,
    args, options) and update all call sites that build shell strings (the places
    that currently interpolate ghHandle and lastRunAt) to pass the executable (e.g.,
    "gh") and an args array with ghHandle and lastRunAt as distinct elements instead
    of interpolating them into one cmd string; validate ghHandle right after parsing
    (variable ghHandle) with a GitHub username pattern such as
    /^[a-zA-Z0-9][a-zA-Z0-9-]{0,38}$/ and reject or sanitize invalid values,
    and keep equivalent stdio/error handling and the same fallback return ('[]') in
    run (or a renamed helper) while including the caught error message in the error
    output.

In @.archon/workflows/maintainer-standup.yaml:

  • Around line 117-144: The script currently overwrites briefs/<date>.md and
    doesn't handle write failures; update the logic around briefsDir/briefPath to
    (1) detect if briefPath exists and, on collision, append an incrementing numeric
    suffix like -1, -2 until a non-existent filepath is found before calling
    writeFileSync, and (2) wrap the writeFileSync calls (for both state.json and the
    brief file) in a try/catch that on error prints the full data.brief_markdown to
    stdout (console.log) and exits non-zero so the synthesized output is not lost;
    reference the existing symbols writeFileSync, briefPath, briefsDir,
    data.brief_markdown, existsSync, and mkdirSync when making the changes.
  • Around line 136-138: The filename uses UTC via new
    Date().toISOString().slice(0, 10), which can produce the wrong local calendar
    day; change the date generation to use the local date instead (e.g., compute
    local year/month/day from new Date() or a helper like
    toLocaleDateString('sv-SE')) so the variable date, used when building briefPath
    (resolve(briefsDir, ${date}.md)) reflects the maintainer's local "today"
    before calling writeFileSync; update the date assignment in that block to use
    the local-date helper.
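The local-date suggestion above can be sketched in one line; 'sv-SE' is one locale whose date format happens to be YYYY-MM-DD (an assumption of this sketch, not necessarily the fix the PR adopts):

```typescript
// Local calendar date as YYYY-MM-DD without the UTC day-shift of
// toISOString().slice(0, 10): the sv-SE locale formats dates in ISO order.
const localDate = new Date().toLocaleDateString('sv-SE');
```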

Nitpick comments:
In @.archon/commands/maintainer-standup.md:

  • Around line 20-38: The fenced code blocks for the upstream outputs are missing
    a language tag and failing markdownlint (MD040); update the three fences that
    wrap $git-status.output, $gh-data.output, and $read-context.output to include a
    language identifier (e.g. use text or json as appropriate) so each block opens
    with a ```text fence, followed by the existing block contents and a closing ``` fence,
    ensuring lint passes for MD040.

In @.archon/scripts/maintainer-standup-gh-data.ts:

  • Around line 33-39: The frontmatter regex is unscoped and currently searches
    the whole file via profile.match; instead, read the file into profile (already
    done) but first extract the YAML frontmatter bounded by leading '---' and the
    next '---' (or EOF) and run the gh_handle regex against that slice only; update
    the code around profilePath/profile/readFileSync and replace the
    profile.match(...) usage with a match on the sliced frontmatter so ghHandle is
    set only from the top-level frontmatter block.

In @.archon/scripts/maintainer-standup-git-status.ts:

  • Around line 17-24: The run function currently uses execSync with a shell
    string; change it to use child_process.execFileSync (or execFileSync) and pass
    git and its arguments as an argv array instead of a shell command to avoid shell
    interpolation; update the implementation referenced by the run function so calls
    that previously interpolated priorSha or used quoted format flags become
    execFileSync('git', ['log', '--format=...'] , { encoding: 'utf8', stdio:
    ['ignore','pipe','pipe'] })-style calls (i.e., drop shell quoting and pass each
    flag/arg as array elements) and preserve the returned shape { stdout: string;
    ok: boolean } and error handling.
  • Around line 56-65: Validate the priorSha value before using it in git range
    commands: in the block that compares priorSha and currentDevSha, add a check
    using a regex like /^[0-9a-f]{7,40}$/i against priorSha and if it fails, skip
    calling run(git log ...) and run(git diff ...) and set newCommits to the
    existing fallback string and leave diffStat empty (or the current fallback
    behavior); update the code around the priorSha/currentDevSha check and use the
    run(...) calls, newCommits, and diffStat variables referenced in the snippet so
    invalid or empty priorSha values never get interpolated into git commands.

</details>

<details>
<summary>🪄 Autofix (Beta)</summary>

Fix all unresolved CodeRabbit comments on this PR:

- [ ] <!-- {"checkboxId": "4b0d0e0a-96d7-4f10-b296-3a18ea78f0b9"} --> Push a commit to this branch (recommended)
- [ ] <!-- {"checkboxId": "ff5b1114-7d8c-49e6-8ac1-43f82af23a33"} --> Create a new PR with the fixes

</details>

---

<details>
<summary>ℹ️ Review info</summary>

<details>
<summary>⚙️ Run configuration</summary>

**Configuration used**: defaults

**Review profile**: CHILL

**Plan**: Pro

**Run ID**: `9e7688c4-5595-4fc8-94cb-c905623122ef`

</details>

<details>
<summary>📥 Commits</summary>

Reviewing files that changed from the base of the PR and between b286ad97d88b74c96e7526371c6cc0ec16c80efb and b492400c5703803c119de9838d187bacdefc8a5a.

</details>

<details>
<summary>📒 Files selected for processing (10)</summary>

* `.archon/commands/maintainer-standup.md`
* `.archon/maintainer-standup/README.md`
* `.archon/maintainer-standup/direction.md`
* `.archon/maintainer-standup/profile.md.example`
* `.archon/scripts/maintainer-standup-gh-data.ts`
* `.archon/scripts/maintainer-standup-git-status.ts`
* `.archon/scripts/maintainer-standup-read-context.ts`
* `.archon/workflows/maintainer-standup.yaml`
* `.gitignore`
* `eslint.config.mjs`

</details>

</details>

<!-- This is an auto-generated comment by CodeRabbit for review status -->

Comment thread .archon/commands/maintainer-standup.md Outdated
Comment on lines +15 to +22
function run(cmd: string): string {
try {
return execSync(cmd, { stdio: ['ignore', 'pipe', 'pipe'] }).toString();
} catch (e) {
process.stderr.write(`gh command failed: ${cmd}\n${(e as Error).message}\n`);
return '[]';
}
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

gh CLI: does gh pr list --json paginate beyond --limit, and what is the maximum --limit accepted?

💡 Result:

No, gh pr list --json does not automatically paginate beyond the value specified by --limit (default 30). It respects the --limit as the maximum number of items to fetch and does not fetch additional pages automatically. Users must use gh api graphql --paginate for full pagination if needed. There is no documented maximum for --limit, but practical limits apply due to GraphQL constraints (e.g., node limits when including fields like commits) and GitHub Search API caps (1000 results max when using --label or --search). Examples show --limit=50000 or --limit=1000 being attempted, but effectiveness depends on filters and fields requested. For large limits without search filters, it uses paginated GraphQL queries (first:100 per page), but stops at the specified --limit.


🏁 Script executed:

cat -n .archon/scripts/maintainer-standup-gh-data.ts | sed -n '1,160p'

Repository: coleam00/Archon

Length of output: 5801


Switch execSync to execFileSync with argument arrays to eliminate shell injection.

The code interpolates ghHandle (parsed as \S+ at line 37) and lastRunAt (from state.json at line 50) directly into shell commands at lines 88, 94, 100, 140. Quoted variables in shell still permit command substitution ($(), backticks, ${...}), and line 88 is unquoted. If profile.md or state.json ever contain shell metacharacters, the script executes arbitrary commands.

Replace execSync(cmd, ...) with execFileSync(file, args, ...) and pass ghHandle and lastRunAt as separate array elements. Example at lines 76–79:

Proposed fix
-import { execSync } from 'node:child_process';
+import { execFileSync } from 'node:child_process';
@@
-function run(cmd: string): string {
+function run(file: string, args: string[]): string {
   try {
-    return execSync(cmd, { stdio: ['ignore', 'pipe', 'pipe'] }).toString();
+    return execFileSync(file, args, { stdio: ['ignore', 'pipe', 'pipe'] }).toString();
   } catch (e) {
-    process.stderr.write(`gh command failed: ${cmd}\n${(e as Error).message}\n`);
+    process.stderr.write(`command failed: ${file} ${args.join(' ')}\n${(e as Error).message}\n`);
     return '[]';
   }
 }
@@
-const allOpenPrs = parseJson<unknown[]>(
-  run(`gh pr list --state open --limit 100 --json ${prFields}`),
-  [],
-);
+const allOpenPrs = parseJson<unknown[]>(
+  run('gh', ['pr', 'list', '--state', 'open', '--limit', '100', '--json', prFields]),
+  [],
+);

Apply similarly to lines 88, 94, 100, 136–140.

At minimum, validate ghHandle against GitHub's username rule (/^[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,38})$/) immediately after parsing (line 38).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.archon/scripts/maintainer-standup-gh-data.ts around lines 15 - 22, The run
function currently shells out via execSync(cmd) which allows shell injection;
change it to use child_process.execFileSync(file, args, options) and update all
call sites that build shell strings (the places that currently interpolate
ghHandle and lastRunAt) to pass the executable (e.g., "gh") and an args array
with ghHandle and lastRunAt as distinct elements instead of interpolating them
into one cmd string; validate ghHandle right after parsing (variable ghHandle)
with the GitHub username regex /^[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,38})?$/ and reject
or sanitize invalid values, and keep equivalent stdio/error handling and the
same fallback return ('[]') in run (or a renamed helper) while including the
caught error message in the error output.
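The username check the comment asks for can be sketched as follows, using the regex quoted in the prompt (the helper name `assertGhHandle` is illustrative):

```typescript
// GitHub username shape per the review comment: 1-39 chars,
// alphanumeric plus hyphen, must start with an alphanumeric.
const GH_USER_RE = /^[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,38})?$/;

export function assertGhHandle(handle: string): string {
  if (!GH_USER_RE.test(handle)) {
    throw new Error(`invalid gh handle: ${JSON.stringify(handle)}`);
  }
  return handle;
}
```

Validating immediately after parsing profile.md means a handle containing shell metacharacters fails fast instead of ever reaching a `gh` invocation.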

Comment on lines +76 to +79
const allOpenPrs = parseJson<unknown[]>(
run(`gh pr list --state open --limit 100 --json ${prFields}`),
[],
);

⚠️ Potential issue | 🟡 Minor

--limit 100 can silently truncate all_open_prs and break the observed_prs invariant.

The synthesis command (.archon/commands/maintainer-standup.md Phase 3) requires next_state.observed_prs to include every entry in all_open_prs; that property is what makes "resolved since last run" detection correct on subsequent runs. With --limit 100, once the open-PR count exceeds 100 the snapshot is silently incomplete and PRs that fall off the tail will be misclassified as "resolved" the next day.

Suggested options:

  • Use --paginate (and a more permissive --limit) to fetch the full set, e.g. gh pr list --state open --limit 1000 --json ...; gh paginates internally for --json queries.
  • Or detect saturation and emit a stderr warning + a truncated: true flag in the output so the synthesis prompt can disclose it instead of silently dropping items.

The same concern applies, more mildly, to --limit 50 on recently_closed_prs/recently_closed_issues (lines 121, 127) when since_date is far in the past after a long break.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.archon/scripts/maintainer-standup-gh-data.ts around lines 76 - 79, The
current creation of allOpenPrs via parseJson(run(`gh pr list --state open
--limit 100 --json ${prFields}`), []) can silently truncate results; change the
gh invocation to fetch the full set (e.g. use `--paginate` and a high `--limit`
such as 1000: run(`gh pr list --state open --limit 1000 --paginate --json
${prFields}`)) or implement truncation detection: after running the query check
the returned count against the requested limit and, if equal, emit a stderr
warning and include a `truncated: true` flag alongside observed_prs so synthesis
can handle incomplete snapshots; update the code paths that populate
observed_prs (the allOpenPrs parseJson call and similar
recently_closed_prs/recently_closed_issues invocations) accordingly.
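The saturation-detection option reduces to one small check. This is a sketch under the assumptions named above; `detectTruncation` is a hypothetical helper, not the script's code:

```typescript
// If gh returned exactly as many items as we asked for, the real set may
// be larger: warn on stderr and flag the snapshot as truncated so the
// synthesis prompt can disclose it instead of misclassifying tail PRs
// as "resolved" on the next run.
export function detectTruncation<T>(
  items: T[],
  requestedLimit: number,
): { items: T[]; truncated: boolean } {
  const truncated = items.length >= requestedLimit;
  if (truncated) {
    process.stderr.write(
      `warning: gh returned ${items.length} items at --limit ${requestedLimit}; snapshot may be incomplete\n`,
    );
  }
  return { items, truncated };
}
```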

Comment on lines +117 to +144
script: |
import { writeFileSync, mkdirSync, existsSync } from 'node:fs';
import { resolve } from 'node:path';

// JSON is valid JS expression syntax — substitute directly without a
// template literal. Wrapping in String.raw breaks if the output contains
// backticks (e.g. markdown code spans inside brief_markdown).
const data = $synthesize.output;

const baseDir = resolve(process.cwd(), '.archon/maintainer-standup');
if (!existsSync(baseDir)) mkdirSync(baseDir, { recursive: true });

writeFileSync(
resolve(baseDir, 'state.json'),
JSON.stringify(data.next_state, null, 2) + '\n',
);

const briefsDir = resolve(baseDir, 'briefs');
if (!existsSync(briefsDir)) mkdirSync(briefsDir, { recursive: true });
const date = new Date().toISOString().slice(0, 10);
const briefPath = resolve(briefsDir, `${date}.md`);
writeFileSync(briefPath, data.brief_markdown);

console.log(JSON.stringify({
date,
state_path: '.archon/maintainer-standup/state.json',
brief_path: `.archon/maintainer-standup/briefs/${date}.md`,
}));

⚠️ Potential issue | 🟡 Minor

Persist node silently overwrites and has no fallback if writes fail.

Two minor reliability gaps in the inline persist script:

  1. If briefs/<date>.md already exists (re-run on the same day), it's silently overwritten — losing the earlier brief without warning.
  2. There's no try/catch around writeFileSync; a transient I/O failure after the LLM synthesis discards the (expensive) brief output.

Consider appending a numeric suffix on collision and printing the full brief_markdown to stdout on write failure so the run isn't a total loss.

🛠 Suggested approach
       writeFileSync(
         resolve(baseDir, 'state.json'),
         JSON.stringify(data.next_state, null, 2) + '\n',
       );

       const briefsDir = resolve(baseDir, 'briefs');
       if (!existsSync(briefsDir)) mkdirSync(briefsDir, { recursive: true });
-      const date = new Date().toISOString().slice(0, 10);
-      const briefPath = resolve(briefsDir, `${date}.md`);
-      writeFileSync(briefPath, data.brief_markdown);
+      const date = /* local YYYY-MM-DD */;
+      let briefPath = resolve(briefsDir, `${date}.md`);
+      let n = 2;
+      while (existsSync(briefPath)) {
+        briefPath = resolve(briefsDir, `${date}-${n}.md`);
+        n++;
+      }
+      try {
+        writeFileSync(briefPath, data.brief_markdown.endsWith('\n') ? data.brief_markdown : data.brief_markdown + '\n');
+      } catch (err) {
+        process.stderr.write(`Failed to write brief (${String(err)}); dumping to stdout:\n`);
+        process.stdout.write(data.brief_markdown);
+        throw err;
+      }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.archon/workflows/maintainer-standup.yaml around lines 117 - 144, The script
currently overwrites briefs/<date>.md and doesn't handle write failures; update
the logic around briefsDir/briefPath to (1) detect if briefPath exists and, on
collision, append an incrementing numeric suffix like `-1`, `-2` until a
non-existent filepath is found before calling writeFileSync, and (2) wrap the
writeFileSync calls (for both state.json and the brief file) in a try/catch that
on error prints the full data.brief_markdown to stdout (console.log) and exits
non-zero so the synthesized output is not lost; reference the existing symbols
writeFileSync, briefPath, briefsDir, data.brief_markdown, existsSync, and
mkdirSync when making the changes.
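Both suggestions together amount to two small helpers. A sketch only (`pickBriefPath`/`writeBrief` are hypothetical names, and the collision suffix starts at `-2` as in the diff above):

```typescript
import { existsSync, writeFileSync } from 'node:fs';
import { resolve } from 'node:path';

// On a same-day re-run, find the first free path: date.md, date-2.md, ...
export function pickBriefPath(briefsDir: string, date: string): string {
  let path = resolve(briefsDir, `${date}.md`);
  let n = 2;
  while (existsSync(path)) {
    path = resolve(briefsDir, `${date}-${n}.md`);
    n++;
  }
  return path;
}

export function writeBrief(path: string, markdown: string): void {
  const body = markdown.endsWith('\n') ? markdown : markdown + '\n';
  try {
    writeFileSync(path, body);
  } catch (err) {
    // Don't lose the expensive synthesis output: dump it before re-throwing.
    process.stderr.write(`Failed to write brief (${String(err)}); dumping to stdout:\n`);
    process.stdout.write(body);
    throw err;
  }
}
```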

Comment thread .archon/workflows/maintainer-standup.yaml Outdated
- gh-data: bump --limit 100 → 1000 on all_open_prs and warn loudly when
  the cap is hit; preserves the observed_prs invariant the next-run
  "resolved since last run" diff depends on. (CodeRabbit critical)
- maintainer-standup.md: clarify P1 CI signal — the gathered payload only
  carries mergeStateStatus, not statusCheckRollup; for borderline P1s,
  drill in via `gh pr checks <n>`. (CodeRabbit minor)
- workflow.yaml persist: write briefs under local YYYY-MM-DD (sv-SE
  locale) instead of UTC ISO date, so an evening run doesn't file
  tomorrow's brief and break recent_briefs lookups. (CodeRabbit minor)
- workflow.yaml persist: wrap state/brief writes in try/catch; on
  failure dump brief_markdown and next_state to stderr so a 5-minute
  Sonnet synthesis isn't lost to a transient disk error. (CodeRabbit minor)
- gh-data + git-status: switch from execSync (shell-string) to
  execFileSync (argv array) for git/gh invocations. Defense-in-depth
  against shell metacharacters in values that pass through (esp. the
  gh_handle from profile.md). (CodeRabbit nitpick)
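The sv-SE local-date trick mentioned above relies on the Swedish locale formatting dates as zero-padded YYYY-MM-DD, which gives a local-timezone ISO-style date string with no manual padding (`localBriefDate` is an illustrative name):

```typescript
// Local calendar date, not UTC: an 11pm run files under today's brief.
export function localBriefDate(d: Date = new Date()): string {
  return d.toLocaleDateString('sv-SE');
}
```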
@Wirasm Wirasm merged commit d35b193 into dev Apr 27, 2026
3 of 4 checks passed
@Wirasm Wirasm deleted the feat/maintainer-standup-workflow branch April 27, 2026 08:37
Wirasm added a commit that referenced this pull request Apr 29, 2026
* chore: update Homebrew formula for v0.3.9

* chore(release-skill): use --help (not version) for Step 1.5 smoke probe (#1359)

The pre-flight binary smoke does a bare `bun build --compile` — it
deliberately skips `scripts/build-binaries.sh` to stay fast. That means
packages/paths/src/bundled-build.ts retains its dev defaults, including
BUNDLED_IS_BINARY = false.

version.ts branches on BUNDLED_IS_BINARY: when true it returns the
embedded string; when false it calls getDevVersion(), which reads
package.json at `SCRIPT_DIR/../../../../package.json`. Inside a compiled
binary SCRIPT_DIR resolves under `$bunfs/root/`, the walk produces a CWD-
relative path that doesn't exist, and the smoke aborts with "Failed to
read version: package.json not found" — a false positive.

Hit during the 0.3.8 release attempt: the real Pi lazy-load fix was
working end-to-end; the smoke test was the only thing failing.

Use --help instead. It exercises the same module-init graph (so it still
catches the real failure modes the skill lists — Pi package.json init
crash, Bun --bytecode bugs, CJS wrapper issues, circular imports under
minify) but has no dev/binary branch, so no false positive.

Also add a longer comment block explaining why --help is preferred, so
this doesn't get "normalized" back to `version` by a future drive-by.

* chore(test-release-skill): preserve archon-stable across test cycles

The brew path of /test-release runs `brew uninstall` in Phase 5 to leave the
system in its pre-test state. For operators using the dual-homebrew pattern
(renamed brew binary at `/opt/homebrew/bin/archon-stable` so it coexists with
a `bun link` dev `archon`), that uninstall wipes the Cellar dir the
`archon-stable` symlink points into → `archon-stable` becomes dangling →
`brew cleanup` sweeps it away on the next brew op. Next time the operator
wants stable, they have to manually re-run `brew-upgrade-archon`.

Fix: make the skill aware of `archon-stable` and restore it transparently.

- Phase 2 item 4: detect the `archon-stable` symlink before any brew op;
  export `ARCHON_STABLE_WAS_INSTALLED=yes` so Phase 5 knows to restore it.
  Only triggers for the brew path (curl-mac/curl-vps don't touch brew so
  they leave `archon-stable` alone).
- Phase 5 brew path: after `brew uninstall + untap`, if the flag was set,
  re-tap + re-install + rename. Verifies the restored `archon-stable`
  reports a version and warns (non-fatal) if the rename target is missing.
  Documents the tradeoff: the restored version is "whatever the tap ships
  today", not necessarily the pre-test version — usually that's what the
  operator wants (the release they just tested becomes stable) but the
  back-version-QA case requires a manual `brew-upgrade-archon` after.
- Phase 1 confirmation banner now mentions that `archon-stable` will be
  preserved so the operator isn't surprised by the reinstall during Phase 5.

No changes to curl-mac/curl-vps paths. No changes to Phase 4 test suite.
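The Phase 2 detection the commit describes (is `archon-stable` present, and is it dangling?) can be sketched like this; the function name and states are hypothetical, and the skill itself expresses this as shell steps rather than code:

```typescript
import { existsSync, lstatSync, readlinkSync } from 'node:fs';
import { resolve, dirname } from 'node:path';

// Classify a path like /opt/homebrew/bin/archon-stable: absent, a regular
// file, a symlink whose Cellar target still exists, or dangling (target
// swept by brew uninstall / brew cleanup).
export function symlinkState(path: string): 'absent' | 'ok' | 'dangling' | 'not-symlink' {
  let st;
  try {
    st = lstatSync(path);
  } catch {
    return 'absent';
  }
  if (!st.isSymbolicLink()) return 'not-symlink';
  const target = resolve(dirname(path), readlinkSync(path));
  return existsSync(target) ? 'ok' : 'dangling';
}
```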

* fix(providers/pi): install PI_PACKAGE_DIR shim so Pi workflows run in a compiled binary (#1360)

v0.3.9 made Pi boot-safe: lazy-loading its imports meant `archon version`
no longer crashed on `@mariozechner/pi-coding-agent/dist/config.js`'s
module-init `readFileSync(getPackageJsonPath())`. That's what the
`provider-lazy-load.test.ts` regression test guards.

The fix was only half the problem though. When a Pi workflow actually
runs, sendQuery() triggers the dynamic import — and Pi's config.js
module-init fires then, hitting the exact same ENOENT on
`dirname(process.execPath)/package.json`. Discovered by running
`archon workflow run test-pi` against a locally-compiled 0.3.9 binary:

    [main] Failed: ENOENT: no such file or directory,
           open '/private/tmp/package.json'
        at readFileSync (unknown)
        at <anonymous> (/$bunfs/root/archon-providertest:184:7889)
        at init_config

Boot-safe ≠ runtime-safe. The `/test-release` run for 0.3.9 passed
because it only exercised `archon-assist` (Claude); Pi was never
actually invoked on the released binary.

Fix: before the dynamic `import('@mariozechner/pi-coding-agent')` in
sendQuery, install a PI_PACKAGE_DIR shim. Pi's config.js checks
`process.env.PI_PACKAGE_DIR` first in its `getPackageDir()` and
short-circuits the `dirname(process.execPath)` walk. We write a
minimal `{name, version, piConfig:{}}` stub to
`tmpdir()/archon-pi-shim/package.json` (idempotent — existsSync check)
and set the env var. Pi only reads `piConfig.name`, `piConfig.configDir`,
and `version` from that file, all optional, so the stub surface is
genuinely minimal.

Localized to PiProvider: no global state, no mutation of any shared
config, no upstream fork. Claude and Codex providers are unaffected
(their SDKs don't have this class of module-init side effect).

Verified end-to-end: built a compiled archon binary with this patch,
ran `archon workflow run test-pi --no-worktree` (Pi workflow with
model `anthropic/claude-haiku-4-5`), got a clean response. Before the
patch, same binary crashed at `dag_node_started` with the ENOENT above.

Regression test added: asserts `PI_PACKAGE_DIR` is set after sendQuery
hits even its fast-fail "no model" path. Together with the existing
`provider-lazy-load.test.ts` (boot-safe) this covers both halves.
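The shim described above can be sketched in a few lines; the helper name and exact stub fields here are illustrative, based on the commit's description of what Pi reads from the file:

```typescript
import { existsSync, mkdirSync, writeFileSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

// Write a minimal package.json stub and point PI_PACKAGE_DIR at it before
// the dynamic import of the Pi SDK. Pi's getPackageDir() checks this env
// var first, short-circuiting the dirname(process.execPath) walk that
// ENOENTs inside a compiled binary.
export function installPiShim(version: string): string {
  const shimDir = join(tmpdir(), 'archon-pi-shim');
  const pkgPath = join(shimDir, 'package.json');
  if (!existsSync(pkgPath)) {
    // Idempotent: re-runs reuse the existing stub.
    mkdirSync(shimDir, { recursive: true });
    writeFileSync(
      pkgPath,
      JSON.stringify({ name: 'archon-pi-shim', version, piConfig: {} }),
    );
  }
  process.env.PI_PACKAGE_DIR = shimDir;
  return shimDir;
}
```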

* feat(providers): autodetect canonical binary install paths for Claude and Codex (#1361)

Both binary resolvers previously stopped at env-var + explicit config and
threw a "not found" error when neither was set. Users who followed the
upstream-recommended install flow (Anthropic's `curl install.sh` for
Claude, `npm install -g @openai/codex`) still had to manually set either
`CLAUDE_BIN_PATH` / `CODEX_BIN_PATH` or the corresponding config field
before any workflow could run.

Add a tier-N autodetect step between the explicit config tier and the
install-instructions throw. Purely additive: env and config still win
when set (precedence covered by new tests). On autodetect miss, the same
install-instructions error fires as before.

Claude probe list (verified against docs.claude.com "Uninstall Claude
Code → Native installation" section):
  - $HOME/.local/bin/claude            (mac/linux native installer)
  - $USERPROFILE\.local\bin\claude.exe (Windows native installer)

Codex probe list (verified against openai/codex README; npm global-
install puts the binary at `{npm_prefix}/bin/<name>` on POSIX,
`{npm_prefix}\<name>.cmd` on Windows):
  - $HOME/.npm-global/bin/codex   (user-set `npm config set prefix`)
  - /opt/homebrew/bin/codex       (mac arm64 with homebrew-node)
  - /usr/local/bin/codex          (mac intel / linux system node)
  - %APPDATA%\npm\codex.cmd       (Windows npm global default)
  - $HOME\.npm-global\codex.cmd   (Windows user-set prefix)

Not probed (explicit override still required):
  - Custom npm prefixes — `npm root -g` would need a subprocess per
    resolve, too much surface for a probe helper
  - `brew install --cask codex` — cask layout isn't a PATH binary
  - Manual GitHub Releases extracts — placement is user-determined
  - `~/.bun/bin/codex` — not documented in openai/codex README

Pi provider intentionally has no equivalent change: the Pi SDK is
bundled into the archon binary (no subprocess), so there's no "binary"
to resolve. Pi auth lives at `~/.pi/agent/auth.json` which the SDK
already finds by default, and the PR A shim (`PI_PACKAGE_DIR`) handles
the package-dir case via Pi's own documented escape hatch.

E2E verified: removed both config entries from ~/.archon/config.yaml,
rebuilt compiled binary, ran `archon workflow run archon-assist` and a
Codex workflow. Logs showed `source: 'autodetect'` for both, responses
returned cleanly.

* fix(providers/test): use os.homedir() instead of $HOME in claude binary autodetect test

The native-installer autodetect test computed its expected path from
process.env.HOME, but the implementation uses node:os homedir(). On
Windows, HOME is typically unset (Windows uses USERPROFILE), so the
test fell back to '/Users/test' while the resolver returned the real
home dir — making the spy's path-equality check fail and breaking CI
on windows-latest.

Mirror the implementation by importing homedir() from node:os and
joining with node:path so the expected path matches the actual
platform-resolved home and separator.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
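The fix amounts to deriving the expected path the same way the resolver does. A minimal sketch (the function name is hypothetical):

```typescript
import { homedir } from 'node:os';
import { join } from 'node:path';

// Platform-resolved home + platform separator, mirroring the
// implementation instead of reading process.env.HOME (unset on Windows).
export function expectedClaudeNativePath(): string {
  return join(
    homedir(),
    '.local',
    'bin',
    process.platform === 'win32' ? 'claude.exe' : 'claude',
  );
}
```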

* fix(server): contain Discord login failure so it doesn't kill the server (#1365)

Reported in #1365: a user running `archon serve` with DISCORD_BOT_TOKEN
set but the "Message Content Intent" toggle disabled in the Discord
Developer Portal saw the entire server crash with `Used disallowed
intents`. Discord rejects the gateway connection (close code 4014) when
a privileged intent is requested without being enabled, and the
unguarded `await discord.start()` propagated the error all the way up,
taking the web UI down with it.

Wrap discord.start() in try/catch — log the failure with an actionable
hint (special-cased for the disallowed-intent error) and continue
running. Other adapters and the web UI come up regardless. The shutdown
handler already uses optional chaining (`discord?.stop()`) so nulling
discord after a failed start is safe.

Other adapters (Telegram, Slack, GitHub, Gitea, GitLab) have the same
unguarded-start pattern but are out of scope for this fix — addressing
them is tracked separately.

Also expanded the Discord setup docs with a caution callout that names
the exact error string and the new log event so users can grep for
both.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
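The containment pattern the commit describes can be sketched as follows; `DiscordLike`, the log wording, and the helper name are hypothetical stand-ins for the real adapter and logger:

```typescript
type DiscordLike = { start(): Promise<void>; stop(): Promise<void> };

// Wrap start() so a gateway rejection (e.g. disallowed privileged intents,
// close code 4014) logs an actionable hint and returns null instead of
// propagating and taking the whole server down. The shutdown path already
// uses optional chaining (discord?.stop()), so null is safe.
export async function startDiscordContained(
  discord: DiscordLike | null,
  log: (msg: string) => void,
): Promise<DiscordLike | null> {
  if (!discord) return null;
  try {
    await discord.start();
    return discord;
  } catch (err) {
    const msg = String(err);
    log(
      msg.includes('disallowed intents')
        ? 'Discord login failed: enable "Message Content Intent" in the Developer Portal'
        : `Discord login failed: ${msg}`,
    );
    return null;
  }
}
```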

* docs(script-nodes): dedicated guide + teach the archon skill (#1362)

* docs(script-nodes): add dedicated guide and teach the archon skill how to write them

Script nodes (script:) have been a first-class DAG node type since v0.3.3 but
were documented only as one-liners in CLAUDE.md and a CI smoke test. Claude
Code reading the archon skill would see "Four Node Types: command, prompt,
bash, loop" and reach for bash+node/python one-liners instead of a proper
script node — losing bun's --no-env-file isolation, uv's --with dependency
pins, and the .archon/scripts/ reuse story.

- New packages/docs-web/src/content/docs/guides/script-nodes.md mirroring the
  structure of loop-nodes.md / approval-nodes.md: schema, inline vs named
  dispatch, runtime/deps semantics, scripts directory precedence (repo > home),
  extension-runtime mapping, env isolation, stdout/stderr contract, patterns,
  and the explicit list of ignored AI fields.
- guides/authoring-workflows.md and guides/index.md updated so the new guide is
  discoverable from both the node-types table and the guides landing page.
- reference/variables.md calls out the no-shell-quote difference between
  bash: and script: substitution — a subtle correctness trap when adapting a
  bash pattern into a script node.
- Sidebar order bumped +1 on hooks/mcp-servers/skills/global-workflows/
  remotion-workflow to slot script-nodes at order 5 next to the other
  node-type guides.

- .claude/skills/archon/SKILL.md: replaces stale "Four Node Types" (which
  also silently omitted approval and cancel) with the accurate seven, with a
  script-node code block showing both inline and named patterns.
- references/workflow-dag.md: full Script Node section covering dispatch,
  resolution, deps, stdout contract, and the list of AI-only fields that are
  ignored; validation-rules list updated.
- references/dag-advanced.md and references/variables.md: retry-support line
  corrected; no-shell-quote note added.
- examples/dag-workflow.yaml: added an extract-labels TypeScript script node
  and updated the header comment.

* fix(docs): review follow-ups for script-node guide

- skills example: extract-labels was reading process.env.ISSUE_JSON which is
  never set; use String.raw`$fetch-issue.output` so the upstream bash node's
  JSON is actually consumed
- guides/script-nodes.md + skills/workflow-dag.md: idle_timeout is accepted
  but ignored on script (and bash) nodes — executeScriptNode only reads
  node.timeout. Clarify that script/bash use `timeout`, not idle_timeout
- archon-workflow-builder.yaml: prompt enumerated only bash/prompt/command/loop,
  so the AI builder could never propose script or approval nodes. Add both
  (plus examples + rule about script output not being shell-quoted) and
  regenerate bundled defaults
- book/dag-workflows.md + book/quick-reference.md + adapters/web.md: fill in
  the node-type references that were missing script, approval, and cancel.
  adapters/web.md also overclaimed "loop" in the palette — NodePalette.tsx
  only drags command/prompt/bash, so note that the other kinds are YAML-only

* docs/skill: general hardening — fix inaccuracies, fill workflow/CLI/env gaps, add good-practices + troubleshooting (#1363)

* fix(skill/when): document the full `when:` operator set and compound expressions

The skill reference previously stated "operators: ==, != only" which is
materially wrong — the condition evaluator supports ==, !=, <, >, <=, >=
plus && / || compound expressions with && binding tighter than ||, plus
dot-notation JSON field access. An agent authoring a workflow from the
skill would think half the operators don't exist.

Replaces the single-sentence section with a structured reference covering:
- All six comparison operators (string and numeric modes)
- Compound expressions with precedence rules and short-circuit eval
- JSON dot notation semantics and failure modes
- The fail-closed rules in full (invalid expression, non-numeric side,
  missing field, skipped upstream)

Grounded in packages/workflows/src/condition-evaluator.ts.

* feat(skill): document Approval and Cancel node types

Approval and cancel nodes are first-class DAG node types (approval since the
workflow lifecycle work in #871, cancel as a guarded-exit primitive) but the
skill never described either one. An agent reading the skill and asked to
"add a review gate before implementation" or "stop the workflow if the input
is unsafe" would fall back to bash + exit 1, losing the proper semantics
(cancelled vs. failed, on_reject AI rework, web UI auto-resume).

Approval node coverage (references/workflow-dag.md, SKILL.md):
- Full configuration block with message, capture_response, on_reject
- The interactive: true workflow-level requirement for web UI delivery
- Approve/reject commands across all platforms (CLI, slash, natural
  language) and the capture_response → $node-id.output flow
- Ignored-fields list + the on_reject.prompt AI sub-node exception

Cancel node coverage (references/workflow-dag.md, SKILL.md):
- Single-field schema (cancel: "<reason>")
- Lifecycle: cancelled (not failed); in-flight parallel nodes stopped;
  no DAG auto-resume path
- The "cancel: vs bash-exit-1" decision rule (expected precondition miss
  vs. check itself failing)
- Two canonical patterns — upstream-classification gate, pre-expensive-step
  gate

Validation-rules list updated to enumerate approval/cancel constraints
(message non-empty, on_reject.max_attempts range 1-10, cancel reason
non-empty), plus a forward note that script: joins the mutually-exclusive
set once PR #1362 lands.

Placement in both files is after the Loop section and before the validation
section, so this commit stays additive with respect to PR #1362's Script
node insertion between Bash and Loop — rebase is clean.

* feat(skill): document workflow-level fields beyond name/provider/model

The skill's Schema section previously showed only name, description, provider,
and model at the workflow level — which is most of a stub. Agents asked to
"use the 1M-context Claude beta" or "run this under a network sandbox" or
"add a fallback model in case Opus rate-limits" had no way to discover
that any of these fields existed at the workflow level.

Adds a comprehensive Workflow-Level Fields section covering:
- Core: name, description, provider, model, interactive (with explicit
  callout that interactive: true is REQUIRED for approval/loop gates on
  web UI — a common footgun)
- Isolation: worktree.enabled for pin-on/pin-off (the only worktree field
  at workflow level; baseBranch/copyFiles/path/initSubmodules are
  config.yaml only, so a cross-reference points there)
- Claude SDK advanced: effort, thinking, fallbackModel, betas, sandbox,
  with explicit per-node-only exceptions (maxBudgetUsd, systemPrompt)
- Codex-specific: modelReasoningEffort (with note that it's NOT the same
  as Claude's effort — this has confused users), webSearchMode,
  additionalDirectories
- A complete worked example combining sandbox + approval + interactive

All fields cross-referenced against packages/workflows/src/schemas/workflow.ts
and packages/workflows/src/schemas/dag-node.ts.

* feat(skill/loop): document interactive loops and gate_message

Interactive loop nodes pause between iterations for human feedback via
/workflow approve — used by archon-piv-loop and archon-interactive-prd.
The skill's Loop Nodes section previously omitted both interactive: true
and gate_message entirely, so an agent writing a guided-refinement
workflow wouldn't know the feature exists or that gate_message is
required at parse time.

Adds:
- interactive and gate_message rows to the config table (marking
  gate_message as required when interactive: true — enforced by the
  loader's superRefine)
- A dedicated "Interactive Loops" subsection explaining the 6-step
  iterate-pause-approve-resume flow
- Explicit call-out that $LOOP_USER_INPUT populates ONLY on the first
  iteration of a resumed session — easy to miss and a common surprise
- Workflow-level interactive: true requirement for web UI delivery
  (loader warning otherwise) so the full-flow example is complete
- Note that until_bash substitution DOES shell-quote $nodeId.output
  (unlike script bodies) — called out since the audit surfaced this
  inconsistency

* fix(skill/cli): complete the CLI command reference with missing lifecycle commands

The CLI reference previously documented only list, run, cleanup, validate,
complete, version, setup, and chat — missing nearly every workflow
lifecycle command an agent needs to operate a paused, failed, or stuck
run. The interactive-workflows reference assumed these commands existed
without actually documenting them.

Adds full documentation for:
- archon workflow status — show running workflow(s)
- archon workflow approve <run-id> [comment] — resume approval gate
  (also populates $LOOP_USER_INPUT on interactive loops and the gate
  node's output when capture_response: true)
- archon workflow reject <run-id> [reason] — reject gate; cancels or
  triggers on_reject rework depending on node config
- archon workflow cancel <run-id> — terminate running/paused with
  in-flight subprocess kill
- archon workflow abandon <run-id> — mark stuck row cancelled without
  subprocess kill (for orphan-cleanup after server crashes — matches
  the #1216 precedent)
- archon workflow resume <run-id> [message] — force-resume specific
  run (auto-resume is default; this is for explicit override)
- archon workflow cleanup [days] — disk hygiene for old terminal runs
  (with explicit callout that it does NOT transition 'running' rows,
  a common confusion)
- archon workflow event emit — used inside loop prompts for state
  signalling; documented so agents don't invent their own mechanism
- archon continue <branch> [flags] [msg] — iterative-session entry
  point with --workflow and --no-context flags

Also:
- Adds --allow-env-keys flag to the `workflow run` flag table with
  audit-log context and the env-leak-gate remediation use case
- Adds an "Auto-resume without --resume" note disambiguating when
  --resume is needed vs. when auto-resume handles it
- Adds --include-closed flag to `isolation cleanup`, which was
  previously missing; converts the flag list to a structured table
- Explains the cancel/abandon distinction (live subprocess vs. orphan)

All grounded in packages/cli/src/commands/workflow.ts, continue.ts,
and isolation.ts.

* feat(skill/repo-init): add scripts/ and state/, three-path env model, per-project env injection

The repo-init reference was missing two first-class .archon/ directories
(scripts/ since v0.3.3, state/ since the workflow-state feature) and had
nothing to say about env — the #1 thing a user hits on first-run when
their repo has a .env file with API keys.

Directory tree updates:
- Adds .archon/scripts/ with the extension->runtime rule (.ts/.js -> bun,
  .py -> uv) so agents know where to put named scripts referenced by
  script: nodes.
- Adds .archon/state/ with explicit "always gitignore" callout — these
  are runtime artifacts, not source. Previously undocumented in the skill.
- Adds .archon/.env (repo-scoped Archon env) and distinguishes it from
  the target repo's top-level .env.
- Adds a "What each directory is for" list so the structure isn't just
  a tree with no narrative.

.gitignore guidance:
- state/ and .env added as must-gitignore (state/ matches CLAUDE.md and
  reference/archon-directories.md — skill was lagging).
- mcp/ demoted to conditional — gitignore only if you hardcode secrets.

New "Three-Path Env Model" section:
- ~/.archon/.env (trusted, user), <cwd>/.archon/.env (trusted, repo),
  <cwd>/.env (UNTRUSTED, target project — stripped from subprocess env).
- Precedence (override: true across archon-owned paths) and the
  observable [archon] loaded N keys / stripped K keys log lines so
  operators can verify what actually happened.
- Decision tree for where to put API keys vs. target-project env vs.
  things Archon shouldn't touch.
- Links to archon setup --scope home|project with --force for writing
  to the right file with timestamped backups.

New "Per-Project Env Injection" section:
- Documents both managed surfaces: .archon/config.yaml env: block
  (git-committed, $REF expansion) and Web UI Settings → Projects →
  Env Vars (DB-stored, never returned over API).
- Names every execution surface that receives the injected vars:
  Claude/Codex/Pi subprocess, bash: nodes, script: nodes, and direct
  codebase-scoped chat.
- Documents the env-leak gate with all 5 remediation paths so an agent
  hitting "Cannot register: env has sensitive keys" knows the options.

Grounded in CHANGELOG v0.3.7 (three-path env + setup flags), v0.3.0
(env-leak gate), and reference/security.md on the docs site.

* fix(skill/authoring-commands): correct override paths and add home-scoped commands

The file-location and discovery sections described an override layout that
does not match the actual resolver. It showed:

  .archon/commands/defaults/archon-assist.md  # Overrides the bundled

and claimed `.archon/commands/defaults/` was where repo-level overrides
lived. In fact the resolver (executor-shared.ts:152-200 +
command-validation.ts) walks `.archon/commands/` 1 level deep and uses
basename matching — putting `archon-assist.md` at the top of
`.archon/commands/` is the canonical way to override the bundled
version. The `defaults/` subfolder is an Archon-internal convention for
shipping bundled defaults, not a user-facing override pattern.

Also, home-scoped commands (`~/.archon/commands/`, shipped in v0.3.7)
were completely absent — agents authoring personal helpers wouldn't
know they could live at the user level and be shared across every repo.

Changes:
- File Location section now shows all three discovery scopes (repo,
  home, bundled) with precedence ordering and 1-level subfolder rules
- Duplicate-basename rule documented as a user error surface
- Discovery and Priority section rewritten with accurate 3-step lookup
  order — no more references to the nonexistent defaults/ override path
- Adds the Web UI "Global (~/.archon/commands/)" palette label note so
  users authoring helpers for the builder know what to expect

No code changes — this is a pure fix of stale/incorrect skill reference
material.

* feat(skill): add workflow good-practices and troubleshooting reference pages

Closes two gaps from the audit. The skill previously had zero guidance on
designing multi-node workflows (what to avoid, what to reach for first,
how to structure artifact chains) and zero guidance on where to look
when things go wrong (log paths, env-leak gate remediations, orphan-row
cleanup, resume semantics).

New references/good-practices.md (9 Good Practices + 7 Anti-Patterns):

- Use deterministic nodes (bash:/script:) for deterministic work, AI for
  reasoning — the single biggest quality lever
- output_format required whenever downstream when: reads a field — the
  most common source of "workflow silently routes wrong"
- trigger_rule: none_failed_min_one_success after conditional branches —
  the classic bug where all_success fails because a skipped when:-gated
  branch doesn't count as a success
- context: fresh requires artifacts for state passing — commands must
  explicitly "read $ARTIFACTS_DIR/..." when downstream of fresh
- Cheap models (haiku) for glue, strong for substance
- Workflow descriptions as routing affordances
- Validate (archon validate workflows) + smoke-run before shipping
- Artifact-chain-first design
- worktree.enabled: true for code-changing workflows (reversibility)
- Anti-patterns with before/after YAML examples for each (AI-for-tests,
  free-form when: matching, context: fresh without artifacts, long flat
  AI-node layers, secrets in YAML, retry on loop nodes, tiny
  max_iterations, missing workflow-level interactive:, tool-restricted
  MCP nodes)
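
The trigger-rule anti-pattern above in miniature (illustrative fragment; node names and the depends_on layout are invented, not copied from a bundled workflow):

```yaml
# Illustrative only. With the default all_success, `join` never fires when
# one when:-gated branch is skipped, because a skipped node is not a success.
nodes:
  - id: branch-a
    when: $classify.output.kind == "feature"
  - id: branch-b
    when: $classify.output.kind == "bugfix"
  - id: join
    depends_on: [branch-a, branch-b]
    trigger_rule: none_failed_min_one_success  # fires if neither branch failed
```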

New references/troubleshooting.md:

- Log location (~/.archon/workspaces/<owner>/<repo>/logs/<run-id>.jsonl)
  with jq recipes for common queries (last assistant message, failed
  events, full stream)
- Artifact location for cross-node handoff debugging
- 9 Common Failure Modes, each with root cause + concrete fix:
  - $BASE_BRANCH unresolvable
  - Env-leak gate (5 remediations)
  - Claude/Codex binary not found (compiled-binary-only)
  - "running" forever (AI working / orphan / idle_timeout)
  - Mid-workflow failure and auto-resume semantics
  - Approval gate missing on web UI (workflow-level interactive:)
  - MCP plugin connection noise (filtered by design)
  - Empty $nodeId.output / field access (4 causes)
- Diagnostic command cheat sheet (list, status, isolation list, validate,
  tail-log, --verbose, LOG_LEVEL=debug)
- Escalation protocol (version + validate + log tail + CHANGELOG + issue)

SKILL.md routing table now dispatches "Workflow good practices /
anti-patterns" and "Troubleshoot a failing / stuck workflow" to the new
references so an agent can find them without having to know they exist.

* docs(book): update node-types coverage from four to all seven

The book is the curated first-contact reading path (landing page → "Get
Started" → /book/). Both dag-workflows.md and quick-reference.md were
stuck on "four node types" — missing script, approval, and cancel. A user
reading the book as their first introduction would form an incomplete
mental model, then find three more node types in the reference section
later with no explanation of when they arrived.

book/dag-workflows.md:
- "four node types" → "seven node types. Exactly one mode field is
  required per node"
- Table now lists Command, Prompt, Bash, Script, Loop, Approval, Cancel
  with one-line "when to use" for each, and cross-links to the dedicated
  guide pages for Script / Loop / Approval
- New sections below the table for Script (inline + named examples with
  runtime and deps), Approval (with the interactive: true workflow-level
  note that's easy to miss), and Cancel (guarded-exit pattern) — keeping
  the existing narrative shape for Bash and Loop

book/quick-reference.md:
- Node Options table now includes script, approval, cancel rows
- agents row added (inline sub-agents, Claude-only)
- New "Script-specific fields" and "Approval-specific fields" subsections
  so the cheat-sheet is actually complete rather than pointing users
  elsewhere for the required constraints
- Retry row callout that loop nodes hard-error on retry — previously
  omitted
- bash timeout note widened to cover script timeout (same semantics)

Both files are docs-web content; the CI build on the docs-script-nodes
PR (#1362) previously validated the Starlight build path with a similar
table addition, so this should render clean.

* fix(skill/cli): remove nonexistent \`archon workflow cancel\`, fix workflow status jq recipe

Two accuracy issues from the PR code-reviewer (comment 4311243858).

C1: \`archon workflow cancel <run-id>\` does NOT exist as a CLI subcommand.
The switch at packages/cli/src/cli.ts:318-485 dispatches on list / run /
status / resume / abandon / approve / reject / cleanup / event — running
\`archon workflow cancel\` hits the default case and exits with "Unknown
workflow subcommand: cancel" (cli.ts:478-484). Active cancellation is
only available via:
  - /workflow cancel <run-id> chat slash command (all platforms)
  - Cancel button on the Web UI dashboard
  - POST /api/workflows/runs/{runId}/cancel REST endpoint

cli-commands.md: removed the \`### archon workflow cancel <run-id>\`
subsection; kept the \`abandon\` subsection but made it explicit that
abandon does NOT kill a subprocess. Added a call-out box at the bottom
of the abandon section explaining where to go for actual cancellation.

troubleshooting.md "running forever" section: split the original
cancel-vs-abandon advice into three bullets — Web UI / CLI abandon (for
orphans, no subprocess kill) / chat \`/workflow cancel\` (for live runs
that need interruption). Added an explicit "there is no archon workflow
cancel CLI subcommand" parenthetical since the wrong command was being
suggested in flow.

I1: the \`archon workflow list --json\` diagnostic used an incorrect jq
filter. workflow list's --json output (workflow.ts:185-219) has shape
{ workflows: [{ name, description, provider?, model?, ... }], errors: [...] }
with no \`runs\` field — \`jq '.workflows[] | select(.runs)'\` returns empty
unconditionally. Replaced with \`archon workflow status --json | jq '.runs[]'\`,
which matches the actual shape of workflowStatusCommand at
workflow.ts:852+ ({ runs: WorkflowRun[] }). Also tightened the narration
to distinguish JSON from human-readable status output.

No change to the commit history in this PR — these are follow-up fixes
to claims I introduced in earlier commits of this branch (f10b989e for
C1, 66d2b86e for I1).

* fix(skill): remove env-leak gate references (feature was removed in provider extraction)

C2 from the PR code-reviewer (comment 4311243858). The pre-spawn env-leak
gate was removed from the codebase during the provider-extraction refactor
— see TODO(#1135) at packages/providers/src/claude/provider.ts:908. Zero
hits for --allow-env-keys / allowEnvKeys / allow_env_keys / allow_target_repo_keys
across packages/. The CLI's parseArgs (cli.ts:182-208) has no
--allow-env-keys option, and because parseArgs uses strict: false, an
unknown --allow-env-keys would be silently ignored rather than erroring.

What remains accurate and is NOT touched:
- Three-Path Env Model section (user/repo archon-owned envs are loaded;
  target repo <cwd>/.env keys are stripped from process.env at boot)
  still correctly describes current behavior, grounded in
  packages/paths/src/strip-cwd-env.ts + env-integration.test.ts
- Per-Project Env Injection section (Option 1: .archon/config.yaml env:
  block; Option 2: Web UI Settings → Projects → Env Vars) is unchanged —
  both remain the sanctioned way to get env vars into subprocesses

Removed claims (all three files):
- cli-commands.md: --allow-env-keys flag row in the workflow run flags
  table
- repo-init.md: the "Env-leak gate" subsection at the end of Per-Project
  Env Injection listing 5 remediations (all of which reference UI/CLI/
  config surfaces that don't exist). Replaced with a succinct callout
  that explains the actual current behavior — target repo .env keys are
  stripped, workflows that need those values should use managed
  injection — so the reader still gets the "where to put my env vars"
  answer
- troubleshooting.md: the "Cannot register: codebase has sensitive env
  keys" section (error message that can no longer be emitted)

If the env-leak gate is ever resurrected per TODO(#1135), the docs can be
re-added then. The CHANGELOG v0.3.0 entry describing the gate is a
historical record of past behavior and does not need to be rewritten.

* fix(skill/troubleshooting): correct JSONL event type names and field name

C3 from the PR code-reviewer (comment 4311243858). The troubleshooting
reference's event-types table used _started / _completed / _failed
suffixes, but packages/workflows/src/logger.ts:19-30 shows the actual
WorkflowEvent.type enum is:

  workflow_start | workflow_complete | workflow_error |
  assistant | tool | validation |
  node_start | node_complete | node_skipped | node_error

The second jq recipe also queried `.event` but the discriminator is `.type`.

Fixes:
- Event table: renamed columns (_started → _start, _completed → _complete,
  _failed → _error). Explicitly called out the field name as `type` so the
  reader knows what jq selector to use
- Replaced the "tool_use / tool_result" row with a single `tool` row and
  listed its actual payload fields (tool_name, tool_input, duration_ms,
  tokens) — tool_use/tool_result are SDK message kinds that appear within
  the AI stream, not top-level log event types
- Added a `validation` row (was missing; it's emitted by workflow-level
  validation calls with `check` and `result` fields)
- Removed `retry_attempt` row — this event type is not emitted to the
  JSONL file. Retry bookkeeping goes through pino logs, not the workflow
  log file
- Added an explicit callout that loop_iteration_started /
  loop_iteration_completed (and other emitter-only events) go through
  the workflow event emitter + DB workflow_events table, NOT the JSONL
  file. Pointed readers to the DB or Web UI for loop-level detail. This
  distinguishes the two parallel event systems — easy to conflate
  (store.ts:11-17 uses _started/_completed/_failed for the DB side,
  logger.ts uses _start/_complete/_error for JSONL)
- Fixed the "all failed events" jq recipe: .event → .type and _failed → _error
- Minor cleanup: the inline "tool_use events" mention in the "running
  forever" section said the wrong event name — updated to "tool or
  assistant events in the tail"

Grounded in packages/workflows/src/logger.ts (canonical JSONL event
shape) and packages/workflows/src/store.ts (the parallel DB event
naming, which the reviewer correctly flagged as different and worth
keeping distinct).

* fix(skill): two stragglers from the code-reviewer audit

Cleanup of two references that slipped through the earlier C1 and C3 fixes:

- references/troubleshooting.md:126: \`node_failed\` → \`node_error\`
  (the "Node output is empty" diagnostics section references the JSONL
  log, which uses the logger.ts enum — not the DB workflow_events table
  which does use \`node_failed\`). The C3 fix corrected the event table
  and one jq recipe but missed this inline mention.

- references/interactive-workflows.md:106: removed \`archon workflow
  cancel <run-id>\` (nonexistent CLI subcommand) from the
  troubleshooting bullet. This pre-dated the hardening PR but fell
  within the C1 remediation scope. Replaced with the
  correct triage: reject (approval gate only) vs abandon (orphan
  cleanup, no subprocess kill) vs chat /workflow cancel (actual
  subprocess termination).

Grounded in the same sources as the earlier C1/C3 commits:
packages/cli/src/cli.ts:318-485 (no cancel case) and
packages/workflows/src/logger.ts:19-30 (JSONL type enum).

* feat(skill): point to archon.diy as the canonical docs source

The skill had no reference to archon.diy (the live docs site built from
packages/docs-web/). Several reference files said "see the docs site"
without naming the URL, leaving the agent to guess or grep the repo for
the hostname. An agent with the skill loaded should know that when the
distilled reference pages don't cover a case, the full canonical docs
are one WebFetch away.

SKILL.md: new "Richer Context: archon.diy" section between Routing and
Running Workflows. Covers:
- When to reach for the live docs (longer examples, tutorial framing,
  features the skill only mentions in passing, "where's that
  documented?" user questions)
- URL map — 13 starting points covering getting-started, book (tutorial
  series), guides/ (authoring + per-node-type + per-node-feature),
  reference/ (variables, CLI, security, architecture, configuration,
  troubleshooting), adapters/, deployment/
- Precedence: skill refs first (context-cheap, tuned for agents), docs
  site as escalation. Prevents agents defaulting to WebFetch when a
  local skill ref already covers the answer

Also upgrades the 5 existing generic "docs site" mentions across
reference files to concrete archon.diy URLs with anchor fragments where
helpful:
- good-practices.md: Inline sub-agents pattern → archon.diy/guides/
  authoring-workflows/#inline-sub-agents
- troubleshooting.md: "Install page on the docs site" → archon.diy/
  getting-started/installation/
- workflow-dag.md: "Workflow Description Best Practices" → anchor link;
  sandbox schema reference → archon.diy/guides/authoring-workflows/
  #claude-sdk-advanced-options
- repo-init.md: Security Model reference → archon.diy/reference/
  security/#target-repo-env-isolation (deep-link into the section that
  covers the <cwd>/.env strip behavior)

URL source of truth: astro.config.mjs:5 (site: 'https://archon.diy').
URL structure mirrors packages/docs-web/src/content/docs/<section>/
<page>.md — verified by the 62 pages the docs build produces.

* chore(workflows): switch default Opus pin to opus[1m] alias (#1395)

Anthropic's Opus 4.7 landed 2026-04-16; on the Anthropic API, opus /
opus[1m] now resolve to 4.7 with a 1M context window at standard
pricing. Using the alias instead of the hard-pinned claude-opus-4-6[1m]
lets bundled default workflows auto-track the recommended Opus version.

No explicit effort is set, so nodes inherit the per-model default
(xhigh on 4.7, high on 4.6).

* fix(workflow): migrate piv-loop plan handoff to $ARTIFACTS_DIR (#1398)

* fix(workflow): migrate piv-loop plan handoff to $ARTIFACTS_DIR (#1380)

The create-plan node used a relative path (.claude/archon/plans/{slug}.plan.md)
that the AI agent would sometimes write to a different location, breaking all
downstream nodes that glob for the plan file. Migrated all plan/progress file
references to $ARTIFACTS_DIR/plan.md and $ARTIFACTS_DIR/progress.txt, matching
the pattern used by archon-fix-github-issue and other workflows.

Changes:
- Replace slug-based plan path with $ARTIFACTS_DIR/plan.md in create-plan node
- Replace ls -t glob discovery with direct $ARTIFACTS_DIR/plan.md reads in
  refine-plan, code-review, and fix-feedback nodes
- Replace empty-string guard with file-existence check in implement-setup bash
- Migrate progress.txt references in implement loop to $ARTIFACTS_DIR/
- Add explicit plan/progress paths in finalize node
- Regenerated bundled-defaults.generated.ts
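
The shape of the change in fragment form (prompts paraphrased; only the paths come from the diff):

```yaml
# Before: relative path the agent sometimes wrote elsewhere
#   "Write the plan to .claude/archon/plans/{slug}.plan.md"
# After: run-scoped artifact path, matching archon-fix-github-issue
nodes:
  - id: create-plan
    prompt: "Write the plan to $ARTIFACTS_DIR/plan.md"
  - id: refine-plan
    prompt: "Read $ARTIFACTS_DIR/plan.md and refine it before review"
```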

Fixes #1380

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(workflow): address review findings in archon-piv-loop

- Rename 'Step 2: Write the Plan' to 'Step 2: Plan File Location' to
  eliminate the duplicate heading that collided with Step 3's identical
  title in the create-plan node
- Guard implement-setup against a 0-task plan file: exit 1 with a
  clear error when no '### Task N:' sections are found, preventing a
  silent no-op implement loop
- Remove 2>/dev/null from code-review commit so pre-commit hook failures
  and other stderr are visible to the agent instead of silently swallowed
- Replace '|| true' on git push in finalize with an explicit WARNING echo
  so push failures (auth, upstream conflict, no remote) surface to the
  agent rather than being silently ignored
- Regenerate bundled-defaults.generated.ts

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* chore(workflows): regenerate bundled defaults to match opus[1m] alias

The bundle was stale relative to the YAML sources after #1395 merged —
check:bundled was failing CI. Regenerated; no YAML edits.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(workflows): add anyFailed status derivation coverage for DAG executor (#1403)

PIV Task 1: Adds three new tests in a dedicated describe block
'executeDagWorkflow -- final status derivation' covering the anyFailed
branch (dag-executor.ts ~line 2956) that previously had no direct test:
- one success + one independent failure calls failWorkflowRun (not completeWorkflowRun)
- multiple successes + one failure calls failWorkflowRun (not completeWorkflowRun)
- trigger_rule: none_failed skips dependent node but anyFailed still marks run failed

Fixes #1381.

* docs/skill: add parameter-matrix.md quick-lookup reference

New reference for the archon skill: a single-glance lookup of which
parameter works on which node type, an intent-based "how do I..." table,
a consolidated silent-failure catalog, and an inline agents: section
(previously only referenced via archon.diy).

Purpose is complementary, not duplicative:
- workflow-dag.md remains the authoring guide
- dag-advanced.md remains the hooks/MCP/skills/retry deep-dive
- good-practices.md remains the patterns and anti-patterns
- parameter-matrix.md is the grep-this-first lookup when you know the
  outcome you want but not which field gets you there

Also registers the new reference in SKILL.md routing table.

* docs: point contributors at PR template and Closes #N convention

Add explicit references to .github/PULL_REQUEST_TEMPLATE.md in both
CONTRIBUTING.md and CLAUDE.md, plus a reminder to link issues with
Closes/Fixes/Resolves so they auto-close on merge. Repo-triage runs
were flagging dozens of partially-filled or unlinked PRs each cycle.

* feat(workflows): add maintainer-standup workflow for daily PR/issue triage (#1428)

* feat(workflows): add maintainer-standup workflow for daily PR/issue triage

Daily morning briefing that pulls origin/dev, triages all open PRs and assigned
issues against direction.md, and surfaces progress vs. the previous run. Designed
for live-checkout use (worktree.enabled: false) so it can read its own state.

Layout under .archon/maintainer-standup/:
  - direction.md (committed) — project north-star: what Archon IS / IS NOT.
    Drives PR P4 polite-decline classification with cited clauses.
  - README.md / profile.md.example — setup docs and template for new maintainers.
  - profile.md, state.json, briefs/YYYY-MM-DD.md — gitignored, per-maintainer.

Engine:
  - 3 parallel gather scripts in .archon/scripts/maintainer-standup-*.ts
    (git-status, gh-data, read-context) — bun runtime, JSON stdout.
  - Synthesis node: command file with output_format schema for
    { brief_markdown, next_state }.
  - Persist node: tiny inline bun script writes both to disk.

Run-to-run continuity: state.json carries observed_prs/issues snapshots, so the
next run can detect what merged, what closed, what the maintainer shipped, and
which carry-over items aged past N days.

Also adds .archon/** to the ESLint global ignore list (matches the existing
.claude/skills/** pattern) since .archon/ is user content and not part of any
tsconfig project.

* fix(maintainer-standup): address CodeRabbit review on #1428

- gh-data: bump --limit 100 → 1000 on all_open_prs and warn loudly when
  the cap is hit; preserves the observed_prs invariant the next-run
  "resolved since last run" diff depends on. (CodeRabbit critical)
- maintainer-standup.md: clarify P1 CI signal — the gathered payload only
  carries mergeStateStatus, not statusCheckRollup; for borderline P1s,
  drill in via `gh pr checks <n>`. (CodeRabbit minor)
- workflow.yaml persist: write briefs under local YYYY-MM-DD (sv-SE
  locale) instead of UTC ISO date, so an evening run doesn't file
  tomorrow's brief and break recent_briefs lookups. (CodeRabbit minor)
- workflow.yaml persist: wrap state/brief writes in try/catch; on
  failure dump brief_markdown and next_state to stderr so a 5-minute
  Sonnet synthesis isn't lost to a transient disk error. (CodeRabbit minor)
- gh-data + git-status: switch from execSync (shell-string) to
  execFileSync (argv array) for git/gh invocations. Defense-in-depth
  against shell metacharacters in values that pass through (esp. the
  gh_handle from profile.md). (CodeRabbit nitpick)

* feat(workflows): support explicit tags in workflow YAML (#1190)

Add optional `tags: string[]` to `workflowBaseSchema`. Explicit values take precedence over keyword inference; `tags: []` suppresses inference end-to-end; omitting the field falls back to inference (backwards compatible). Non-array values warn-and-ignore matching the sibling `worktree`/`additionalDirectories` patterns.
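
The three behaviors in fragment form (workflow names invented; `tags` sits at the top level per workflowBaseSchema):

```yaml
# Explicit list: used as-is, keyword inference skipped
name: deploy-docs
tags: [docs, release]
---
# Empty list: suppresses inference end-to-end
name: scratch-experiment
tags: []
---
# Field omitted: falls back to keyword inference (backwards compatible)
name: legacy-flow
```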

* feat(workflows): add maintainer-review-pr and group maintainer workflows under maintainer/ (#1430)

* feat(workflows): add maintainer-review-pr and group maintainer workflows under .archon/workflows/maintainer/

Adds the maintainer-review-pr workflow — a Pi/Minimax-based PR triage
flow that gates on direction alignment, scope focus, and PR-template
quality before doing any deep review. If the gate clears, runs the
five review aspects (code/error-handling/test-coverage/comment-quality/
docs-impact) as parallel Archon nodes and auto-posts a synthesized
review comment. If the gate fails (direction conflict, multiple
concerns, sprawling scope), drafts a polite-decline comment and pauses
for the maintainer's approval before posting.

Reorganizes the existing maintainer-standup workflow into the same
subfolder so all maintainer-facing workflows live together. Subfolder
grouping is supported by the workflow loader (1 level deep, resolution
by filename).

What lands:

- .archon/workflows/maintainer/maintainer-standup.yaml (moved from
  .archon/workflows/maintainer-standup.yaml)
- .archon/workflows/maintainer/maintainer-review-pr.yaml (new)
- .archon/commands/maintainer-review-{gate,code-review,error-handling,
  test-coverage,comment-quality,docs-impact,synthesize,report}.md (new,
  Pi-tuned variants of the existing review-agent commands so they avoid
  Claude-only Task / sub-agent patterns)

Pi/Minimax integration:

- Uses provider: pi, model: minimax/MiniMax-M2.7 — verified via the
  e2e-minimax-smoke test that Pi correctly routes to Minimax (session
  jsonl confirms provider=minimax) and that Pi's best-effort
  output_format parser handles the gate's nested schema.
- Two test runs landed real comments: a direction-decline on PR #1335
  and a deep-review on PR #1369. Both were posted to GitHub via the
  workflow's gh pr comment node.

* chore(workflows): also group repo-triage under .archon/workflows/maintainer/

repo-triage is the third maintainer-facing workflow alongside maintainer-standup and maintainer-review-pr; group it in the same subfolder for consistency. Subfolder resolution is by filename so the workflow name is unchanged.

* feat(pi): use ModelRegistry to support custom models and skip auth for unmapped providers (#1284)

Closes #1096.

- Switch Pi provider model lookup from pi-ai's getModel() (static catalog
  only) to ModelRegistry.create(authStorage).find() so user-configured
  custom models in ~/.pi/agent/models.json (LM Studio, ollama, llamacpp,
  custom OpenAI-compatible endpoints) are discoverable.
- Remove the local lookupPiModel helper.
- For env-var-mapped providers (anthropic, openai, etc.) still throw
  with a pi /login hint when credentials are missing. For unmapped
  providers, log pi.auth_missing at info and continue so local models
  that don't need credentials work without ceremony.
- Surface modelRegistry.getError() in the not-found message and emit
  pi.model_not_found so users debugging custom-provider configs see the
  real cause (e.g. missing baseUrl in models.json).
- Guard AuthStorage.create() and ModelRegistry.create() with try/catch
  so a malformed ~/.pi/agent/auth.json surfaces with Pi-framed context
  instead of a raw SDK stack trace.
- Document the credential-free path for local providers in ai-assistants.md.

Co-authored-by: Matt Chapman <Matt@NinjitsuWeb.com>

* chore(workflows): group smoke-test workflows under test-workflows/ + add e2e-minimax-smoke (#1431)

* chore(workflows): group all smoke-test workflows under .archon/workflows/test-workflows/

Move the 7 existing e2e-*.yaml smoke tests plus the new e2e-minimax-smoke
test into a dedicated subfolder. Subfolder grouping is supported by the
workflow loader (1 level deep, resolution by filename) so workflow names
are unchanged. Mirrors the .archon/workflows/maintainer/ split landing
in #1430.

Also adds e2e-minimax-smoke.yaml — a sanity check that Pi correctly
routes to Minimax M2.7 via the user's local pi auth, and that Pi's
best-effort output_format parser handles a small nested schema. Asserts
routing by reading the most recent Pi session jsonl rather than asking
the model to self-identify (LLMs are unreliable narrators about their
own identity, especially when Pi's system prompt mentions other
providers as defaults).

* fix(e2e-minimax-smoke): address CodeRabbit review on #1431

- Widen find window from -mmin -3 to -mmin -10. The smoke's three Pi
  nodes plus the assert can collectively run several minutes on slow
  networks; 3 minutes was tight enough to false-FAIL on a healthy run.
  (CodeRabbit minor)
- Drop non-deterministic `head -1` over `find` output. find doesn't
  guarantee any order; on a tie, the wrong file would be picked. Now
  iterates all matching sessions and breaks on first one carrying the
  routing signal — any match is sufficient evidence. (CodeRabbit minor)
- Replace single-regex `'"provider":"minimax".*"modelId":"MiniMax-M2.7"'`
  with two separate greps joined by `&&`. JSON field order isn't part of
  Pi's contract; a future Pi release reordering `provider` and `modelId`
  in the model_change event would silently false-FAIL the original
  pattern. The new check is order-independent. (CodeRabbit major)

* fix(maintainer-review): address CodeRabbit findings on #1430 (#1432)

Six findings, two majors and four minors/nitpicks:

- gate.md L17 vs L77: resolved conflicting input-source instructions.
  Body claimed "all inline, no extra fetch" while a later phase
  permitted reading PULL_REQUEST_TEMPLATE.md. Now: explicit "one
  allowed extra read" callout in Phase 1 + matching wording in Gate C.
  (CodeRabbit major)

- gate.md fenced blocks: added missing language identifiers (text/json/
  markdown) to satisfy markdownlint MD040. (CodeRabbit minor)

- gate.md L155 + read-context.ts: deterministic clock. The 3-day deadline
  was anchored to prior_state.last_run_at, which can be stale and produce
  past-dated deadlines. Moved both today and deadline_3d into the
  read-context.ts output (computed via sv-SE locale → ISO date in local
  time) and instructed the gate to use $read-context.output.deadline_3d
  directly. LLMs are unreliable at calendar arithmetic; this avoids it
  entirely. (CodeRabbit major)

- maintainer-review-pr.yaml fetch-diff: dropped 2>/dev/null on gh pr diff
  so auth / network / deleted-PR failures fail the node instead of
  feeding an empty diff to the gate. Empty-but-successful diff (PR has
  no changes) is now an explicit marker the gate can detect. (CodeRabbit
  minor)

- maintainer-review-pr.yaml approve-unclear: added capture_response: true
  so the maintainer's approve comment flows to the report node. Reject
  reasoning is already captured by Archon's run record. (CodeRabbit
  minor)

- maintainer-review-pr.yaml post-decline + report.md: the gh pr edit
  --add-label call previously swallowed all errors with || true and the
  report still claimed the label was applied. Now writes applied/skipped
  to $ARTIFACTS_DIR/.label-applied + the gh stderr to .label-error so
  the report can describe the actual outcome. (CodeRabbit nitpick)

* fix(workflows): approval gate bypass after reject-with-redraft on resume (#1435)

* fix(workflows): approval gate bypass after reject-with-redraft on resume

When an approval node was rejected with on_reject.prompt, the synthetic
PromptNode built to run the on_reject prompt reused the approval gate's
own node ID. executeNodeInternal then wrote a node_completed event with
that ID, causing getCompletedDagNodeOutputs to treat the gate as already
completed on the next resume — bypassing the human gate entirely.

Fix: give the synthetic node the ID `${node.id}:on_reject` so its
node_completed event has a distinct step_name that won't match the
approval gate slot in priorCompletedNodes.

Adds a regression test asserting no node_completed event with the
approval gate's ID is written during on_reject execution.

Fixes #1429
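
The bypass mechanism can be shown with a toy model (illustrative only, not the actual executor code): completed nodes are matched by step_name, so reusing the gate's own ID for the on_reject prompt marks the gate itself as completed.

```typescript
type CompletedEvent = { step_name: string };

const gateId = "review";
const syntheticId = `${gateId}:on_reject`; // the fix: a distinct step_name

// The on_reject execution writes node_completed under the synthetic ID.
const events: CompletedEvent[] = [{ step_name: syntheticId }];
const completed = new Set(events.map((e) => e.step_name));

// On resume, the approval gate slot no longer matches, so the gate
// still pauses for the human instead of being skipped.
console.log(completed.has(gateId));      // false
console.log(completed.has(syntheticId)); // true
```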

* test(workflows): add positive assertion and SSE side-effect comment for on_reject synthetic node

Add complementary positive assertion to the regression test to verify that
node_completed is written exactly once with step_name 'review:on_reject',
ensuring future refactors that suppress the event entirely would be caught.

Add inline comment in executeApprovalNode documenting the known SSE side-effect:
node_started/node_completed events with nodeId='review:on_reject' flow through
the SSE pipeline into the web UI, resulting in a transient phantom node in the
execution view. This is cosmetic-only — the human gate contract is preserved.

* simplify: reduce duplicate cast pattern in on_reject test assertions

* feat(workflows): add mutates_checkout to allow concurrent runs on live checkout (#1438)

* feat(workflows): add mutates_checkout field to skip path-lock for concurrent runs

Add `mutates_checkout: boolean` (optional, default true) to the workflow
schema. When set to false, the executor skips the path-exclusive lock
that serializes all runs on the same working path, allowing N concurrent
runs on the same live checkout.

The primary use case is `maintainer-review-pr`, which reads shared state
but writes only to per-run artifact paths and GitHub PR comments — two
parallel reviews of different PRs should not fail with "Workflow already
active on this path".

Changes:
- `schemas/workflow.ts`: add optional `mutates_checkout` field
- `loader.ts`: parse and propagate the field (warn-and-ignore on invalid values)
- `executor.ts`: wrap path-lock guard in `if (workflow.mutates_checkout !== false)`
- `executor.test.ts`: two new tests in the concurrent-run guard suite
- `maintainer-review-pr.yaml`: opt in with `mutates_checkout: false`
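
A minimal sketch of the guard semantics described above (not Archon's real loader or executor code; function names are hypothetical): only an explicit `false` skips the path lock, while omitted, `true`, or any non-boolean value keeps the default exclusive behavior.

```typescript
type Workflow = { name: string; mutates_checkout?: boolean };

// parse/warn: non-boolean values are dropped with a warning and the
// field falls back to its default (lock held).
function parseMutatesCheckout(raw: unknown): boolean | undefined {
  if (typeof raw === "boolean") return raw;
  if (raw !== undefined) {
    console.warn(`ignoring non-boolean mutates_checkout: ${String(raw)}`);
  }
  return undefined;
}

function needsPathLock(workflow: Workflow): boolean {
  // Deliberately `!== false` rather than a truthiness check, so an
  // omitted field preserves the original serializing behavior.
  return workflow.mutates_checkout !== false;
}

console.log(needsPathLock({ name: "a" }));                          // true
console.log(needsPathLock({ name: "b", mutates_checkout: false })); // false
```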

* test(workflows): add loader tests for mutates_checkout parsing

- Add 5 tests covering false, true, omitted, and invalid (string "yes") values
- Invalid non-boolean values are dropped with a warning; now explicitly tested
- Remove the // end mutates_checkout guard trailing comment (no precedent in file)
- Clarify loader comment: "parse/warn pattern" not "warn-and-ignore pattern" to avoid implying the return style matches interactive

* simplify: collapse nodeType/aiFields pair into single nonAiNode object in parseDagNode

* docs: replace String.raw with direct assignment in script node examples (#1434)

* docs: replace String.raw with direct assignment in script node examples

String.raw`$nodeId.output` fails silently when substituted output contains
a backtick, terminating the template literal early and producing cryptic parse
errors. JSON is valid JS expression syntax, so direct assignment is safe for
all valid JSON values including those with backticks.

- Replace String.raw pattern in dag-workflow.yaml example
- Replace String.raw pattern in archon-workflow-builder.yaml template
- Add CAUTION bullet in workflow-dag.md Script Node section
- Add Silent Failures item #14 in parameter-matrix.md
- Add Starlight caution aside in script-nodes.md
- Extend script bodies bullet in variables.md
- Regenerate bundled-defaults.generated.ts

Fixes #1427
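
The hazard can be demonstrated with illustrative values (not taken from the affected workflows):

```typescript
// If substituted JSON contains a backtick, wrapping it in String.raw`...`
// terminates the template literal at that backtick and the generated
// script fails to parse:
//
//   const data = JSON.parse(String.raw`{"note": "run `bun test` first"}`);
//   // template literal ends at the first inner backtick -> SyntaxError
//
// Direct assignment is safe because every valid JSON value is also a
// valid JS expression, backticks included:
const data = { note: "run `bun test` first" };
console.log(data.note); // backtick survives intact
```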

* docs: fix Rule 6 in generate-yaml prompt to distinguish bun vs uv patterns

Rule 6 still referenced JSON.parse after the example was updated to direct
assignment, creating a contradiction for the AI code generator. Update the
prose to explicitly distinguish TypeScript/bun (direct assignment) from
Python/uv (json.loads), matching the updated embedded example.

* chore(workflows): group experimental workflows under .archon/workflows/experimental/

Move two repo-scoped workflows that were sitting untracked at the workflow
root into a dedicated subfolder. Subfolder grouping is supported by the
loader (1 level deep, resolution by filename), so workflow names are
unchanged and the /release skill still resolves archon-release correctly.

Files moved:
- archon-fix-github-issue-experimental.yaml — Path-A variant of the
  issue-fix workflow used today to land #1434, #1435, #1438.
- archon-release.yaml — the live release workflow used by the /release
  skill end-to-end (validate -> binary smoke -> version bump -> changelog
  -> approval -> commit -> PR -> tag -> Homebrew formula update).

* fix(workflows): export ARTIFACTS_DIR, LOG_DIR, BASE_BRANCH to bash nodes (#1387)

executeBashNode previously only merged explicit envVars on top of
process.env. The three well-known workflow directories (artifactsDir,
logDir, baseBranch) were passed as function parameters and used for
compile-time substitution of $ARTIFACTS_DIR / $LOG_DIR / $BASE_BRANCH
in the script body, but were never added to the subprocess environment.

As a result, any script that relied on shell-runtime expansion — e.g.
JSON_FILE="${ARTIFACTS_DIR}/foo.output.json" inside a heredoc, an
inherited helper script, or a `bash -c` subshell — saw the variable
unset and silently fell back to its default (typically an empty string
or "."), writing artifacts to the workflow cwd instead of the nominal
artifacts directory.

Always build subprocessEnv from process.env plus the three well-known
directories, then allow explicit envVars to override. Compile-time
substitution behavior is unchanged; existing scripts that do not
reference these variables are unaffected; user-supplied envVars still
win on conflict.
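
The merge order described above can be sketched like this (variable names follow the commit text; this is a simplified model, not the actual executeBashNode source):

```typescript
// Well-known directories are layered over process.env; explicit envVars
// are spread last so user-supplied values still win on conflict.
function buildSubprocessEnv(
  artifactsDir: string,
  logDir: string,
  baseBranch: string,
  envVars: Record<string, string> = {},
): Record<string, string | undefined> {
  return {
    ...process.env,
    ARTIFACTS_DIR: artifactsDir,
    LOG_DIR: logDir,
    BASE_BRANCH: baseBranch,
    ...envVars,
  };
}

const env = buildSubprocessEnv("/runs/42/artifacts", "/runs/42/logs", "main", {
  BASE_BRANCH: "dev", // explicit override wins
});
console.log(env.ARTIFACTS_DIR, env.BASE_BRANCH);
```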

* fix(workflow): substitute $nodeId.output refs in approval messages (#1426)

* fix(workflow): substitute \$nodeId.output refs in approval messages

Approval node messages were emitted as raw strings, bypassing the
substituteNodeOutputRefs() pass that prompt/bash/loop/cancel nodes
all run. This made interactive workflows like atlas-onboard show
literal "\$gather-context.output.repo_name" placeholders to humans
at HITL gates, leaving them unable to know what they were approving.

Fix: render the approval.message through substituteNodeOutputRefs
once at the top of the standard approval gate path, then use the
resolved string in all 4 emission sites (safeSendMessage,
createWorkflowEvent, pauseWorkflowRun, event-emitter).

Test: new dag-executor.test case wires a structured-output upstream
node into an approval node and asserts pauseWorkflowRun receives the
substituted message ("Repo: hcr-els | App: CCELS | Port: 3012")
rather than the literal placeholders.

Repro: any workflow with an approval node whose message references
\$nodeId.output[.field]. Observed in the wild on atlas-onboard's
confirm-context HITL gate.
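
A toy sketch of the ref-substitution idea (not the real substituteNodeOutputRefs implementation; the node/field names are taken from the repro above):

```typescript
// Resolve $nodeId.output.field references against upstream structured
// outputs; unknown references are left as-is rather than erased.
function substituteRefs(
  message: string,
  outputs: Record<string, Record<string, string>>,
): string {
  return message.replace(
    /\$([\w-]+)\.output\.([\w-]+)/g,
    (whole, nodeId, field) => outputs[nodeId]?.[field] ?? whole,
  );
}

const resolved = substituteRefs("Repo: $gather-context.output.repo_name", {
  "gather-context": { repo_name: "hcr-els" },
});
console.log(resolved); // Repo: hcr-els
```

Doing this once, before the message fans out, keeps all four emission sites consistent by construction.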

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(workflow): extend approval-substitution test to cover all 4 emission sites

Per CodeRabbit review: the original test only verified pauseWorkflowRun
received the substituted message, but the fix touches 4 emission sites.
A future regression at safeSendMessage / createWorkflowEvent / event-emitter
would silently leave the test passing while users still saw raw $node.output
placeholders.

Adds two additional assertions:
- platform.sendMessage prompt contains substituted message + does NOT
  contain literal $gather-context.output placeholders
- The persisted approval_requested workflow event's data.message is
  substituted

Event-emitter assertion deferred (no existing pattern for spying on the
global emitter in this test file). Covering two of the three secondary
surfaces closes the practical regression risk: both are user-visible
(chat prompt + audit-log event); the emitter is internal only.

Test count: 7 pass / 22 expect() (was 18). Full suite 193 pass / 353
expect() — no regressions.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(workflows): expose $LOOP_PREV_OUTPUT in loop node prompts (#1286) (#1367)

* feat(workflows): expose $LOOP_PREV_OUTPUT in loop node prompts (#1286)

Adds a new substitution variable that carries the previous loop iteration's
cleaned output into the next iteration's prompt. Empty on iteration 1; the
prior iteration's output (after stripCompletionTags) on iteration 2+.

Why: fresh_context: true loops have no way to reference what the previous
pass produced or why it failed without dragging the full session forward.
$LOOP_PREV_OUTPUT closes that gap with zero session-cost — same trust
boundary as $nodeId.output, no new external surface.

Changes:
- packages/workflows/src/executor-shared.ts: substituteWorkflowVariables
  accepts a 10th positional loopPrevOutput arg and substitutes
  $LOOP_PREV_OUTPUT (defaults to '').
- packages/workflows/src/dag-executor.ts: executeLoopNode passes
  lastIterationOutput on iteration 2+ (and explicit '' on iteration 1 /
  the first iteration of an interactive resume, since lastIterationOutput
  is a per-call variable that does not survive resume metadata).
- Unit tests: 3 new cases in executor-shared.test.ts.
- Integration tests: 2 new cases in dag-executor.test.ts verifying the
  prompt sent to the AI on iter 1 vs iter 2, and that the value reflects
  cleaned output (no <promise> tags).
- Docs: variables.md, loop-nodes.md (new "Retry-on-failure" pattern),
  CLAUDE.md variable reference.

Backward compatibility: prompts that don't reference $LOOP_PREV_OUTPUT are
unaffected. All 843 workflow tests + type-check + lint + format:check +
bun run validate pass locally.
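
The substitution semantics can be modeled in a few lines (a simplified sketch, not Archon's substituteWorkflowVariables): empty on iteration 1, the prior iteration's cleaned output on iteration 2+.

```typescript
// iteration is 1-based; lastIterationOutput is a per-call variable that
// is undefined on iteration 1 (and on the first iteration after an
// interactive resume, since it does not survive resume metadata).
function substituteLoopPrevOutput(
  prompt: string,
  iteration: number,
  lastIterationOutput: string | undefined,
): string {
  const value = iteration > 1 ? (lastIterationOutput ?? "") : "";
  return prompt.split("$LOOP_PREV_OUTPUT").join(value);
}

const prompt = "Previous attempt:\n$LOOP_PREV_OUTPUT\nTry again.";
const iter1 = substituteLoopPrevOutput(prompt, 1, undefined);
const iter2 = substituteLoopPrevOutput(prompt, 2, "tests failed: 3");
console.log(iter1);
console.log(iter2);
```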

* docs: address coderabbit review on variables/loop-nodes

- variables.md: include $LOOP_PREV_OUTPUT in substitution-order list and
  availability table to match the new variable row at line 30
- loop-nodes.md: document the interactive-resume exception where the first
  iteration after an approval-gate resume still receives an empty
  $LOOP_PREV_OUTPUT regardless of iteration number (per dag-executor.ts
  L1781-1783 where i === startIteration always clears prev output)

* docs(changelog): add Unreleased entry for $LOOP_PREV_OUTPUT (#1367 review)

* test(loop): add resume-from-approval integration test for $LOOP_PREV_OUTPUT (#1367 review)

Per maintainer-review-pr suggestion (Wirasm): two-call integration test
covering the resume-from-approval scenario.

  - Call 1: fresh interactive loop pauses at the gate after iteration 1 and
    asserts $LOOP_PREV_OUTPUT substitutes to empty on iter 1 (no prior
    output) plus the gate pause is recorded.
  - Call 2: resumed run with metadata.approval populated. The first
    resumed iteration must substitute $LOOP_PREV_OUTPUT to '', NOT to the
    paused run's iter-1 output (which lived in a different process and is
    not persisted). $LOOP_USER_INPUT still flows through as normal.

Locks the documented invariant at dag-executor.ts:1769-1772.

---------

Co-authored-by: voidborne-d <DottyEstradalco@allergist.com>

* feat(maintainer-standup): surface contributor replies since last run (#1457)

The brief was missing a key signal — when contributors reply on PRs or
issues, the maintainer wouldn't see it explicitly. Empirically reviewed
PR replies were buried under aggregate updatedAt timestamps with no
indication of WHO replied or WHAT they said.

This adds a new "Replies waiting on you" section to the daily brief,
sourced from two paginated GitHub API calls scoped by since=last_run_at:

  - /repos/{o}/{r}/issues/comments  PR + issue conversation comments
  - /repos/{o}/{r}/pulls/comments   inline code-review comments

Filters applied:
  - Skip the maintainer's own comments (gh_handle from profile.md)
  - Skip GitHub bot accounts (login ending in [bot]) — coderabbitai,
    chatgpt-codex-connector, dependabot, etc. They post a constant
    churn of automated review tooling that drowns out human replies;
    the maintainer wants the latter.

Output is grouped by PR/issue number with kind classification:
  - issue              comment on a non-PR issue
  - pr_conversation    PR conversation-level comment
  - pr_review          inline code-review comment (most actionable —
                       usually needs a code-level response, so kind
                       upgrades to pr_review whenever review comments
                       arrive on a PR that also has conversation ones)

Sorted by recency (newest reply first). Synthesizer reads
gh-data.output.replies_since_last_run and renders a section.
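
The filter and kind-upgrade passes can be sketched as follows (shapes simplified; `user.login` matches the GitHub REST comment payload, the rest is assumption, not the actual gather script):

```typescript
type Comment = { user: { login: string }; body: string };

// Drop the maintainer's own comments and GitHub App bot accounts
// (login ending in "[bot]").
function keepHumanReplies(comments: Comment[], ghHandle: string): Comment[] {
  return comments.filter(
    (c) => c.user.login !== ghHandle && !c.user.login.endsWith("[bot]"),
  );
}

// kind upgrade: inline review comments are the most actionable, so a PR
// that has both conversation and review comments classifies as pr_review.
type Kind = "issue" | "pr_conversation" | "pr_review";
function upgradeKind(current: Kind, incoming: Kind): Kind {
  return incoming === "pr_review" ? "pr_review" : current;
}

const replies = keepHumanReplies(
  [
    { user: { login: "coderabbitai[bot]" }, body: "automated review" },
    { user: { login: "Wirasm" }, body: "own comment" },
    { user: { login: "contributor" }, body: "ptal" },
  ],
  "Wirasm",
);
console.log(replies.length); // 1
```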

Verified on a backdated state.json (last_run_at = yesterday morning):
22 human replies on 22 PRs/issues, bot noise filtered (32 → 22 after
the [bot] filter). Surfaces…